Source code and assets: https://zenva.com/UniteIndia2018

In Unity, a camera is a device that captures and displays the game world to the user. But the way you work with cameras in VR is different from how you work with them in normal 3D games and applications, so that is the first thing I want to address.
In a non-VR game you view the game world through the screen of your computer: if the player physically moves or rotates the computer, the content of the screen doesn't change.
If they want to look around in such a game, they have to use the mouse or keyboard to rotate the camera. Also, there will be times when they are not in control, for example in a cut scene, where the camera might show the action from different angles.
VR apps work differently. The game world is seen through your HMD. The user can look around, which in turn rotates or translates the camera in the virtual world. The rotation of the camera is read directly from the headset. If you have, say, a dialog between two characters, in a non-VR game you can cut to show the character who is talking. In a VR experience, it's up to the player where they want to look.
In real life it would be quite invasive if someone forced your head to look in a certain direction. The same applies to VR. If you try to force a camera rotation onto your users, they will most likely abandon your app instantly.
The same thing applies to hand controllers. This brings us to the concept of the XR camera rig, which is how we represent the user in VR.
Think of the camera rig as the body of the player. In real life, if you hop into a vehicle, the vehicle moves your whole body; it doesn't move your head or your hands independently. If the vehicle rotates, it rotates your whole body.
That is the approach we take when developing for VR and AR.
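The vehicle analogy can be sketched in code. This is a minimal, hypothetical example (the script and field names are assumptions, not from the talk): locomotion moves the rig root, while the headset keeps exclusive control of the child camera's rotation.

```csharp
using UnityEngine;

// Hypothetical sketch: move the whole rig (the player's "body"), never the
// camera itself. The camera is a child of the rig, and the HMD drives the
// camera's local rotation directly, so scripts only touch the rig root.
public class RigMover : MonoBehaviour
{
    [SerializeField] private Transform cameraRig; // root object containing the camera
    [SerializeField] private float speed = 2f;    // metres per second

    private void Update()
    {
        // Translate the rig root like a vehicle carrying the player.
        // The headset remains free to rotate the head camera underneath it.
        cameraRig.position += cameraRig.forward * speed * Time.deltaTime;
    }
}
```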
What we'll do now is go to Unity so that I can show you how to get started.
Create a new project using the VR Lightweight RP (Preview) template. This lets us use the new Lightweight Render Pipeline, which provides better performance for VR applications. The template also includes a script we'll be using.
We are now able to position ourselves in a virtual world, but how can we make it interactive?
There are many different ways in which people can interact with a VR environment. Some of them resemble how we interact with physical objects in real life, whilst others are much more abstract. The methods of interaction that are available to you as a developer will depend on the hardware you are using and its tracking capabilities.
Pointing at objects and pressing a button is one of the most common and simplest forms of interaction, as it works on even the most limited headsets. The analogy here is that of hovering the mouse over something.
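The mouse-hover analogy maps naturally onto a raycast from the controller. Here is a minimal sketch under that assumption (the class name and log message are illustrative, not from the talk):

```csharp
using UnityEngine;

// Hypothetical sketch of a laser pointer: cast a ray from the controller
// along its forward direction and report whatever it hits, much like
// hovering a mouse cursor over an object on screen.
public class LaserPointer : MonoBehaviour
{
    [SerializeField] private float maxDistance = 10f;

    private void Update()
    {
        // The ray starts at the controller's position and follows its aim.
        if (Physics.Raycast(transform.position, transform.forward,
                            out RaycastHit hit, maxDistance))
        {
            Debug.Log($"Pointing at {hit.collider.name}");
        }
    }
}
```

Attach this to the controller object in the rig; any object with a collider in front of it will register as "hovered".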
6-DOF controllers allow for interactions more akin to those of real life: touching, grabbing, carrying, pulling, pushing. The list of verbs goes on and on.
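As a rough illustration of one of those verbs, grabbing can be approximated by parenting an object to the hand while a button is held. This is a simplified sketch with assumed names, and it uses a generic input button as a stand-in for a real grip axis:

```csharp
using UnityEngine;

// Hypothetical sketch of a 6-DOF "grab": while the grip is held, parent the
// touched object to the hand so it follows the controller; release it when
// the grip is let go. Real projects would use proper XR input, physics, and
// throw velocity; this only shows the core idea.
public class SimpleGrabber : MonoBehaviour
{
    private Transform held;

    private void OnTriggerStay(Collider other)
    {
        if (held == null && Input.GetButton("Fire1")) // stand-in for a grip button
        {
            held = other.transform;
            held.SetParent(transform);                // object now follows the hand
        }
    }

    private void Update()
    {
        if (held != null && !Input.GetButton("Fire1"))
        {
            held.SetParent(null);                     // drop the object
            held = null;
        }
    }
}
```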
What we'll do now is develop a simple system to interact with objects in VR.
We are going to start by creating the Interactable class; this is the end result. We will have three events: one for when we bring our laser pointer over the object, one for when we move it off, and one for when we are pointing at it and press a button.
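A class matching that description might look like the following sketch. The event and method names here are assumptions for illustration; the talk's actual script may differ:

```csharp
using UnityEngine;
using UnityEngine.Events;

// Sketch of the Interactable class described above: three UnityEvents,
// exposed in the Inspector, that a pointer script can trigger. Designers
// can then hook up responses (highlighting, sounds, etc.) without code.
public class Interactable : MonoBehaviour
{
    public UnityEvent onPointerEnter; // laser pointer moves over the object
    public UnityEvent onPointerExit;  // laser pointer moves off the object
    public UnityEvent onPointerClick; // button pressed while pointing at it

    public void PointerEnter() => onPointerEnter.Invoke();
    public void PointerExit()  => onPointerExit.Invoke();
    public void PointerClick() => onPointerClick.Invoke();
}
```

A pointer script (such as a controller raycaster) would call `PointerEnter`, `PointerExit`, and `PointerClick` on whichever Interactable its ray currently hits.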