Having received the Oculus Rift development kit last month, Virtual Architectures has spent the time exploring existing Oculus-ready games and demos to get a sense of what works in VR and what doesn’t. In the meantime we’ve also completed the crucial first phase of technical testing by creating a scene and exploring it with the Oculus Rift headset, as you can see in the teaser video above. We’ve decided not to take you on the tour inside just yet.
The main aim of testing at this stage is to establish that there won’t be any unexpected problems in the proposed workflow for developing the final Panopticon Project experience:
- 3D Modelling in SketchUp
- Export the model to FBX and import it into Unity
- Assemble the scene and integrate Oculus Rift controller
- Optimise the scene
- Build the final Unity application (a rough sketch of this step follows below)
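As an idea of what that last step can look like, a standalone build can be triggered from a small Unity editor script. The scene path, output location and menu name below are illustrative placeholders rather than our actual project settings:

```csharp
using UnityEditor;

// Editor-only helper that builds the standalone player for the test scene.
// The scene path and output name are placeholders, not the real project settings.
public class PanopticonBuilder
{
    [MenuItem("Panopticon/Build Standalone")]
    public static void BuildStandalone()
    {
        string[] scenes = { "Assets/Scenes/PanopticonTest.unity" }; // illustrative path

        BuildPipeline.BuildPlayer(
            scenes,
            "Builds/PanopticonTest.exe",   // output location (placeholder)
            BuildTarget.StandaloneWindows, // DK1-era builds targeted the desktop
            BuildOptions.None);
    }
}
```

The same result can of course be achieved through File > Build Settings in the editor; a script like this just makes repeated test builds quicker.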
We were really pleased with the result.
The Panopticon model shown in the video was created using the free modelling program SketchUp Make. After a few teething problems, importing the model into Unity worked fine. To speed up the test we reused the terrain, skybox and background sounds from the existing Tuscany demo provided by Oculus VR and Unity. Setting up the Oculus Rift controller was surprisingly easy.
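For anyone following along, typical teething problems with SketchUp FBX exports are unit scale and missing lightmap UVs. A rough sketch of an import script that pins those settings down could look like the following; the folder name and scale factor are illustrative assumptions, not our actual values:

```csharp
using UnityEditor;

// Runs automatically whenever a model under the chosen folder is (re)imported.
// The folder name and scale factor are illustrative, not the project's actual values.
public class SketchUpImportSettings : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        if (!assetPath.Contains("/Models/")) return; // placeholder folder

        ModelImporter importer = (ModelImporter)assetImporter;
        importer.globalScale = 0.0254f;       // e.g. treat SketchUp inches as metres
        importer.generateSecondaryUV = true;  // generate lightmap UVs for baking later
    }
}
```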
If you are wondering why there are two images side by side in the video, this is because the Oculus Rift headset displays a slightly different image to each eye. The two images of the virtual environment are adjusted to match the estimated ‘interpupillary distance’ between the user’s eyes. This convinces the brain that both eyes are looking at a single unified scene from slightly different perspectives, simulating our ordinary experience of three-dimensional depth. Each image is also rendered with a ‘barrel distortion’ which counteracts the pincushion distortion introduced by the headset’s lenses. The lenses are designed to expand the image to cover the user’s full field of view and improve their sense of visual immersion in the virtual scene.
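For the curious, the barrel distortion is essentially a radial remapping: each point is rescaled relative to the lens centre by a polynomial in its squared distance from that centre. A minimal sketch of the mapping, with illustrative coefficients rather than the DK1’s calibrated values:

```csharp
using UnityEngine;

// Illustrative radial 'barrel' warp: a point at distance r from the lens centre is
// rescaled by (k0 + k1*r^2 + k2*r^4 + k3*r^6). The coefficients are placeholders,
// not the headset's calibrated values; the real warp runs per pixel in a shader.
public static class BarrelWarp
{
    static readonly float[] k = { 1.0f, 0.22f, 0.24f, 0.0f };

    public static Vector2 Distort(Vector2 uv, Vector2 lensCentre)
    {
        Vector2 offset = uv - lensCentre;
        float r2 = offset.sqrMagnitude;
        float scale = k[0] + r2 * (k[1] + r2 * (k[2] + r2 * k[3]));
        return lensCentre + offset * scale;
    }
}
```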
In the process of taking apart the Tuscany demo to set up the scene, we picked up lots of tips for optimisation in Unity. Technical areas requiring further investigation are as follows, with a rough example after the list:
- Lightmapping and Dynamic Shadows – Determine how shadows are created, with various benefits and trade-offs for performance
- Shaders – Control how light interacts with model surfaces to give a realistic feel
- Occlusion Culling – Hides geometry the user isn’t currently looking at to reduce processing load and improve performance
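As a rough indication of the kind of settings involved, shadow range and per-camera occlusion culling can be tweaked from a script while profiling. The values below are illustrative starting points, not tuned figures:

```csharp
using UnityEngine;

// Quick performance toggles used while profiling a scene.
// The shadow distance and culling flag here are illustrative starting points only.
public class PerformanceTweaks : MonoBehaviour
{
    void Start()
    {
        // With baked lightmaps doing most of the work, real-time shadows can be
        // restricted to a short range around the camera.
        QualitySettings.shadowDistance = 40f;

        // Occlusion data is baked in the editor's Occlusion Culling window, then
        // enabled per camera so hidden geometry is skipped at render time.
        Camera.main.useOcclusionCulling = true;
    }
}
```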
At this stage work is ready to commence on building the final Panopticon model and creating a suitable landscape environment to provide context. This will be informed by research undertaken using Jeremy Bentham’s own Panopticon Writings along with Janet Semple’s book Bentham’s Prison: A Study of the Panopticon Penitentiary.
On to Phase 2!