27. What is YOUR strategy for...
•Virtual Reality
•Augmented Reality
•Synthetic Reality
•Mixed Reality
•360° Video
•…
28. Where is VR being used today to add value?
● Used to visualize architectural plans and resolve some litigation
● Used by physicians before major surgery
● Banks are exploring VR as an alternative to physical branches
● Portfolio visualization and display of quantitative and qualitative data, data manipulation, data visualization
● Telling historical stories
● Social experiences (including dating)
● Games and entertainment (porn, etc.)
● Used by marketers for marketing everything: new movies, magazines and more
29. Information Challenges in VR
● Where will the user be?
● Can the user read the text?
● Does the user navigate to the information or do we present the information upon request?
● How much time do we have with the user?
● What system constraints do we have?
● How do we ensure the user is looking where we need them to look?
● How do I ensure the user understands what to do with the object?
● Is the information shared?
● Are we simulating a historical event, an architectural possibility, a crash scene, etc.?
30. We're VR enthusiasts. We love it.
The future is being built. Why not go
build something, too?
heather@arkitome.com
31. Resources
Hierarchy of needs in virtual reality https://medium.com/@beaucronin/the-hierarchy-of-needs-in-virtual-reality-development-4333a4833acc#.azez4jma2
Cognitive flow http://www.gamasutra.com/view/feature/166972/cognitive_flow_the_psychology_of_.php
Game UI design http://devmag.org.za/2011/02/02/video-game-user-interface-design-diegesis-theory/
VR Interface Design Pre-Visualisation Methods (Mike Alger) https://youtu.be/id86HeV-Vb8
Gamer motivation GDC talk http://quanticfoundry.com/2016/04/07/gdc-talk/
UX of VR http://www.uxofvr.com/
Editor's Notes
Good afternoon! I want to get started right on time because we have a lot to cover in the next 20 minutes. I’m Heather, this is Eric and we are the core team behind Arkitome. And for those of you who don’t know, Bacon Fest is happening today and there’s a western theme this year, so good times!
There are a lot of things we could cover in terms of designing and delivering information in VR, and I wouldn't be able to cover it all even if I had all day to talk to you. So I'm going to touch quickly on a lot of high-level points and move fast enough that we get to everything.
We’ll start with an overview of VR and related technologies to lay some groundwork.
Gone are the days of a bounding box. With VR our canvas is essentially infinite. We’ll be talking a lot today about how users approach these spaces because designing for a flat screen and designing for an immersive space are completely different approaches. Instead of dealing with real estate that looks like this, we’re working with something like this:
“VR” seems to have entered the vernacular and morphed into a catchall for a lot of things, ranging from 360 degree video to augmented reality, mixed reality, shared reality and so on. The ideas we’re going to talk about today apply for the most part to all of these platforms.
This particular video is really neat. In this shot we have one person wearing a Vive headset with controllers, and the video is recorded from the perspective of a second person using Microsoft HoloLens. Both users are active in the same space so we have virtual reality and augmented reality coming together in a shared reality experience.
Because the market is so hot right now there's been an explosion of content-viewing hardware for VR. We have head-mounted displays such as Gear VR, Rift, Vive, HoloLens, and so on, all the way down to something like this: This is People VR. It was in the checkout line at the grocery store. The cardboard with lenses came folded neatly inside; you tear it out, put it together, grab the app from the app store, and stick your phone in there. My 11-year-old put this together. It's a fantastic marketing tactic.
Talking about information architecture in VR today, we’ll also need to touch a little on user experience design. We need to understand how the user will access the information and some basics of player behavior in order to effectively structure information. VR is more than video games at this point, but video games are a large segment of the VR market and they are our focus at Arkitome, so we’ll talk about game design but we will make an effort to point out concepts that apply to all rich media. Also, video games provide us with an extensive library and history of digital 3D spaces. So when looking for frameworks of thought or best practices, we lean heavily on games.
Another good source of reference material is the field of theme park design, which frankly is much more similar to room-scale VR than 3D games are. In theme parks, as in any form of VR, information is a key component of the way a space creates flow and movement as visitors go through it. Information also lays the groundwork for interactivity.
Let's talk a little about our users. There's a hierarchy of needs in virtual reality. Notice how interpretability is our MVP here. Effective information architecture will impact the top three tiers of this pyramid, starting with "does this place make sense?" right up through enjoyment and wanting to come back for more.
The concept of cognitive flow in games has been around for a long time, and it holds for VR. This concept of flow is a little different from what we talked about in the last slide, where we were thinking about actual movement, as in how people physically walk through a space. Cognitive flow is a key component of immersion. Information design has a big impact on cognitive flow, particularly with regard to eliminating distractions and communicating goals and rules. Users need to know how to find this information and how to get back to it if they have questions later. There should be no extraneous information to interfere with concentration, and, as in other applications, information placement should be intuitive so the player doesn't disengage or come out of immersion due to confusion.
Player motivation matters for more than just games. These categories developed by Nick Yee are consistent across rich media interactions and they impact the way people approach the experience, the way they look for information, and the types of information they expect and need to find delight and satisfaction with the experience.
In a game, as opposed to a non-interactive experience, it’s important to remember that the player is not passive, they are part of the scene and will expect to be able to both interact with and impact the environment around them.
There are a few different models for thinking about information in games if designing for games is new to you. This one is pretty useful. We can classify information based on the answer to these two questions: Is it part of the game story? Is it part of the game world?
If it’s not part of the story, and not part of the world, it would live in something like an on-screen menu or a HUD.
Information that's part of the game world but not part of the story would have a spatial representation; think of an aura or halo around an object that indicates the player can interact with or control it.
Information that's part of the game story but not the world would have a meta representation. Think of things that interact with the fourth wall, like blood splatter that hits your screen, or haptic feedback where your controllers vibrate.
Information that's part of both the world and the story will have a narrative representation. Things like damage to buildings that happens as a result of user action, a health bar that's part of a suit as opposed to the common health bar that hovers in space over a character's head, and so on.
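The two questions above define four quadrants, which can be sketched as a simple lookup. This is an illustrative sketch of the diegesis classification, not code from any engine; the function name and labels are mine:

```python
def representation(in_story: bool, in_world: bool) -> str:
    """Classify game information by the two diegesis questions:
    is it part of the game story? is it part of the game world?"""
    if in_story and in_world:
        return "narrative"      # e.g. visible damage to buildings
    if in_world:
        return "spatial"        # e.g. an interaction halo around an object
    if in_story:
        return "meta"           # e.g. blood splatter, haptic rumble
    return "non-diegetic"       # e.g. an on-screen menu or HUD

# A health bar built into the character's suit is story + world:
print(representation(in_story=True, in_world=True))  # narrative
```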
Going back to theme park design for a moment… There is what's known as a cascade of questions that people have when they enter a new space. It’s always the same. And the first question is, “Where am I?” With VR, the user typically starts in a virtual waiting room. When your title loads you have your first interaction point. This space establishes overall tone and it absolutely has to transport your user. It’s recommended that we give users at least 30 seconds to acclimate to the VR environment before they have to do anything.
Like other applications we also have to provide roadmaps, both for the 3D world and to answer questions like "where can I go?" and "what can/should I do?" In terms of guiding viewer attention: Oculus Story Studio's Lost experimented with this by providing the viewer with a firefly guide, which appeared in a specific part of the scene to draw attention prior to the action occurring. There are other devices to direct attention: lighting cues, sound cues, the focal point of a character onscreen, or even verbal/action cues. (Note: VR does not translate well in screenshots.)
Thanks to the work of Mike Alger and others we have some nice maps of where in space to place information in relation to the user.
The no-zone is about half a meter around the player - you can't put anything here; it's too close.
The main content zone is directly in front of your player. The peripheral zones are how far someone can comfortably look to the side to see things, and the curiosity zone is where the user would have to actually turn their body to see what’s going on there.
We can comfortably look down about 12 degrees and up about 20 degrees.
Hierarchy is established by placing things closer to the cone of focus directly in front of the player.
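The zone maps above can be sketched as a small classifier. The half-meter no-zone comes from the slides; the horizontal angle cutoffs in this sketch are illustrative assumptions, not canonical values:

```python
import math

NO_ZONE_RADIUS_M = 0.5      # from the slides: too close to focus on
MAIN_HALF_ANGLE = 30.0      # assumed cutoff for the main content zone
PERIPHERAL_HALF_ANGLE = 55.0  # assumed cutoff before body-turning is needed

def placement_zone(x: float, z: float) -> str:
    """Classify a point (meters, player at origin, +z straight ahead)
    into Alger-style comfort zones."""
    distance = math.hypot(x, z)
    if distance < NO_ZONE_RADIUS_M:
        return "no-zone"            # can't put anything here
    azimuth = abs(math.degrees(math.atan2(x, z)))
    if azimuth <= MAIN_HALF_ANGLE:
        return "main content"       # directly in front of the player
    if azimuth <= PERIPHERAL_HALF_ANGLE:
        return "peripheral"         # a comfortable sideways glance
    return "curiosity"              # requires turning the body

print(placement_zone(0.0, 1.5))  # main content
print(placement_zone(2.0, 0.0))  # curiosity
```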
A few slides ago we talked about narrative, non-narrative, and spatial representation. What is generally becoming accepted in the VR space is that non-narrative forms like HUDs don't work in VR. It may seem like a brilliant idea to put non-narrative HUD-like elements on the player's viewport, but a HUD will fall inside that no-zone we saw in the last slide: it's too close, and our eyes aren't able to focus on something at that distance. No matter how this is done, you will encounter trouble with players who cannot read the text, cannot adjust the fonts, or whose particular optical sizing does not work. Spatial UI is generally recommended as an alternative to both narrative and non-narrative user interfaces. This is a huge difference from designing for screens: you can't just snap the interface to the user's face; it has to be out there in the world somewhere.
Game menus in VR show a wide array of variation and experimentation. For titles where gaze is the primary input device, the menu has to be trimmed down to very few options, as in DisneyVR.
In Call of Starseed, the in-game level-select menu items are 1980s cassette tapes.
GearVR has an application launch pad of experiences for an interactive menu. It’s also based on gaze, but you have to actually touch a physical button to select.
Space Pirate Trainer attempts an in-game, three dimensional representation of multiple points of data the player would find relevant as well as selection options.
Tilt Brush positions all the in-game menu interface elements on the control wands.
Locomotion is a critical ingredient in what the industry refers to as simulator sickness. Locomotion, that is, how human beings move through virtual spaces, is a fundamental component of virtual reality. Doing it wrong means people get physically sick. Much like this website. There are ways to limit sickness and maintain the experience you want to deliver. For example, Google Earth VR uses peripheral constriction: in flight mode, the player's peripheral view constricts.
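Peripheral constriction is usually a vignette that tightens with speed. A minimal sketch, assuming a linear ramp between an open and a constricted radius (the values and mapping here are illustrative, not Google's actual tuning):

```python
def vignette_radius(speed: float, max_speed: float,
                    open_r: float = 1.0, closed_r: float = 0.4) -> float:
    """Return the visible-circle radius (as a fraction of the full view)
    for the current flight speed: fully open at rest, tightest at top speed.
    Radii and the linear ramp are illustrative assumptions."""
    t = min(max(speed / max_speed, 0.0), 1.0)  # normalized speed, clamped
    return open_r + (closed_r - open_r) * t

print(vignette_radius(0.0, 10.0))   # fully open when stationary
print(vignette_radius(10.0, 10.0))  # tightest at full speed
```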
The VR frontier is growing rapidly, with experimentation around modes of movement.
Teleport has become quite popular. This method turns the VR camera off, relocates the player, and then turns the camera back on.
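The blink-teleport sequence just described can be sketched as a three-step routine. The engine hooks are passed in as callables because the real calls depend on your engine; everything here is a hypothetical stand-in:

```python
def blink_teleport(set_position, fade, destination):
    """Blink-style teleport: blank the view, relocate the player,
    then restore the view. `set_position` and `fade` stand in for
    hypothetical engine hooks."""
    fade(to_black=True)         # the camera "turns off"
    set_position(destination)   # the player is relocated while blind
    fade(to_black=False)        # the camera comes back on

# Record the call order with simple stand-ins:
log = []
blink_teleport(set_position=lambda p: log.append(("move", p)),
               fade=lambda to_black: log.append(("fade", to_black)),
               destination=(3.0, 0.0, 4.0))
print(log)  # [('fade', True), ('move', (3.0, 0.0, 4.0)), ('fade', False)]
```

Ordering is the whole point: the move must happen strictly between the two fades, so the player never sees the camera sweep through space.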
You've probably seen VR systems with treadmills like this one, flying contraptions, bikes, climbing rigs, and running in place. Many of these systems are available as part of frameworks emerging in niche VR developer communities.
Another of the player questions we have to answer is "Who am I?" VR has a harder time with this than augmented reality, because the player character just doesn't behave enough like our natural form to provide a seamless experience. There are ways to make floating hands feel a little more natural, but frankly the most successful player characters differ from our bodies in some meaningful way. For example, arms that can stretch way out when you fling them with the controller add a lot of fun to the experience, and they free us from having to try to perfectly re-create reality. Add John Wick picture.
So, this is actually a pretty old infographic (it's from 2012), but we haven't gone backwards from here in the last few years. We can engage cross-platform consumers more deeply with a transmedia strategy that offers different ways to get story information and interact with the world across platforms, rather than simply providing the same experience on every platform.
We have to consider asymmetric experiences, as we saw in the shared reality slide, where multiple users are in the space via different screens, and structure our delivery of information so that each user has a rich, useful, and delightful experience. There's a great example of multi-screen interaction in the VR Museum of Fine Art experience. It's available for Vive right now, and the designer did something really cool with the implementation of the tip jar. The experience is a re-creation of the Museum of Fine Art, so you go in, you can walk around and get up close with some exhibits, there are informational placards, and so on. There are also, scattered around the space, tip jars on pedestals. If you go up to a tip jar and tap it with your controller, a website will load on your desktop to take you to PayPal to donate. It's a really clever way to get around the lack of in-app purchase capability for titles on Steam.
As humans we rely on multiple senses to process information and make decisions. With VR you can take something abstract, like data manipulation, and make it visceral, and have a greater impact on user behavior that way.
Where will the user be? Can the user read the text? Does the user navigate to the information or do we present the information upon request? How much time do we have with the user? What system constraints do we have? How do we ensure the user is looking where we need them to look? How do I ensure the user understands what to do with the object? (John Wick earpiece example) Is the information shared? (e.g., Rec Room, multiplayer, etc.) Are we simulating a historical event, an architectural possibility, a crash scene, etc.?
I want to end with a clarification. The title of our talk is Information and experience design in 360°. As we can see from everything presented here, that’s not quite accurate. It’s really design in infinite space. If you have questions feel free to get in touch by email or here this afternoon. I love hearing about projects you’re working on and talking about possibilities and ideas, so conversation is always welcome.