Gametech Orlando: The Future of Virtual Worlds

Join us for a special date and time as Metanomics broadcasts live from Gametech, the annual military conference on games and virtual worlds for training and simulation.

Virtual worlds have become an important technology to support training and community outreach. But over the past several years, changes in the virtual world industry have opened up new choices while closing others. Advances like the consumer adoption of Microsoft Kinect, the widening use of Unity 3D, and the coming changes to the browser with the launch of HTML5 and WebGL are opening up a new range of options.

Click here to watch the video:
http://www.metanomics.net/show/march_24th_live_from_gametech_orlando_-_the_future_of_virtual_worlds/


Transcript

  1. 1. METANOMICS: GAMETECH ORLANDO: THE FUTURE OF VIRTUAL WORLDS - MARCH 24, 2011 ANNOUNCER: Metanomics is owned and operated by Remedy and Dusan Writer's Metaverse. ROBERT BLOOMFIELD: Hi. I'm Robert Bloomfield, professor at Cornell University's Johnson Graduate School of Management. Today we continue exploring Virtual Worlds in the larger sphere of social media, culture, enterprise and policy. Naturally, our discussion about Virtual Worlds takes place in a Virtual World. So join us. This is Metanomics. METANOMICS ANNOUNCER: Metanomics is filmed today in front of a live audience at our studios in Second Life. We are pleased to broadcast weekly to our event partners and to welcome discussion. We use ChatBridge technology to allow viewers to comment during the show. Metanomics is sponsored by the Johnson Graduate School of Management at Cornell University. Welcome. This is Metanomics. ROBERT BLOOMFIELD: Welcome, everyone, to a special Thursday morning edition of Metanomics. I am coming to you live from our studios on the Metanomics Sim in Second Life, but we are joining the Gametech 2011 Conference this morning, to hear a panel discussion featuring Metanomics' own Dusan Writer, Doug Thompson of Remedy Communications, as well as two other experts who have been mapping out the future of virtual technology and particularly the future of technology with military applications. So we're joined by Richard Boyd, of Lockheed Martin, and Dr. Mic Bowman, of Intel.
  2. 2. Before we talk much about our panelists, I thought I would just give a little bit of context surrounding the conference itself. First, although everyone refers to the conference as Gametech, it is actually the Defense Gametech Users Conference, and it does have very specifically a focus on defense applications. Now the location of the conference is in Orlando, Florida, and I thought it would be useful to go back to a transcript from--it seems like a long, long time ago, January of 2008, when I had the opportunity to interview Robert Gehorsam, who was then the president of Forterra, that was a Virtual World platform very focused on serious uses and particularly defense-oriented uses of Virtual Worlds. Robert Gehorsam introduced me to a phrase I had never heard before, that is the Military entertainment complex. So let me put this in some context. I'm going to quote Robert Gehorsam talking about University of Central Florida, which is in Atlanta [MEANT ORLANDO], and here's what Gehorsam had to say, so this is a quote, "So UCF is one of the biggest sort of unknown universities in the country certainly, and it really is for various reasons probably the simulation capital of the United States, something I didn't know until a few years ago. As some people have sort of drolly put it, it's at the center of the Military entertainment complex. "All the Services, and that's Services with a capital S--Armed Forces--have their simulation commands based in Orlando, and you also have all the theme parks. And when you look at the birth of simulation, which, really, someone noticed earlier related to flight simulators which led to motion simulators. The difference between a ride, a motion simulation ride at
  3. 3. Universal, and a flight simulator is kind of minimal. So there's an enormous amount of talent in the digital film and media departments, in the computer science departments and all throughout that area. So for those of you who are students or interested in looking at or studying in these areas, UCF would be an interesting place to look at." I bring this up because the conference is in Orlando, the center of the Military entertainment complex, and I suspect that what we'll be hearing today from our panelists is to give us an indication of where Virtual World technology is going, by the people who have the money and the incentive, really more than any other groups, to push this technology forward. So here, let me read from the Gametech 2011 website the objectives of the conference. The Defense Gametech Users Conference goals are to advance game technology and its use within the Department of Defense; provide a forum for Department of Defense game technology users to exchange ideas and experience new technologies; to inform, educate and train Department of Defense personnel on the use of game technology for military training. The objectives, and I love the Military and the government in general because they distinguish between goals and objectives, something I'm trying to teach my accounting students to do. The objectives are to provide tutorials for Department of Defense personnel, that maximize their ability to use game technology [fielded within?] Department of Defense; to provide Department of Defense personnel an update on industry and academia gaming, Virtual World and mobile application trends; to provide community at large an update on Department of Defense gaming, Virtual World and mobile application projects.
  4. 4. I feel like I've spent so much time this semester teaching my managerial accounting students here at Cornell Johnson School, let me just--an aside on goals versus objectives, I think it's useful to think of goals as being much more general and objectives as being specific, definite, concrete outcomes that you are hoping that you will achieve. So anyway, let's turn. I see we just have a couple minutes before the conference begins so we can talk a little bit about our panelists here. Let me just quickly find our run sheet. As I mentioned, Douglas Maxwell is providing some introductory remarks. We will then be hearing from Mic Bowman, of Intel, Richard Boyd, of Lockheed Martin, and Doug Thompson, of Remedy Communications. I think the audience is quite familiar with Doug Thompson who is the owner of Remedy Communications and the owner of Metanomics as well. Richard Boyd, from Lockheed Martin, has been working closely with a colleague David Smith, and I think are worth a moment of our time. Both Richard Boyd and David Smith are Lockheed Martin employees, and they believe that they are on the verge of what they're calling the holy grail of virtual reality. Upon joining Lockheed Martin a few years ago, Boyd created what they're calling--and here I'm quoting from some PR material, "an informal, internal Lockheed Martin organization called Virtual World Labs, which draws from creative expertise across the corporation." Essentially forming what Boyd calls a [no-o-sphere?], or a sphere of human thought that they can use in the Virtual World. Let me check here. I see that it is, according to my clock, 9:45. I'm not sure. I will be
  5. 5. getting a heads-up when we're about to get started. Mic Bowman, the remaining speaker in the conference, is someone who came to my attention through the wonderful blogging at UgoTrade. And there is here--let me actually paste in--I'm just going to paste in this chat here. In the chats are a link to UgoTrade and an interview with Mic Bowman. Mic is emphasizing in this--now this is admittedly from 2008, but a fascinating article on connected visual computing, which is the union of three different domains for applications: the MMOG, Massively Multiplayer Online Game; the Metaverse, which is the traditional universe of Virtual Worlds, like Second Life, which may or may not be game-oriented. And then a word that despite the fact that this is from 2008 may be new to many of you: the Paraverse, P A R A, that's the Paraverse, which is augmented reality. Right? So not in a Virtual World but adding virtual elements to our world. It's interesting, since 2008, there have been so many changes in technology. Really the biggest one, in my view, the biggest one is the rise of the mobile device and, again, my own personal view, I think that the rise of the mobile device has caused traditional immersion in Virtual Worlds a lot of difficulty. It's hard to get immersed in a screen that's three inches by four inches in size. But by the same token, I believe that it has raised the promise of the Paraverse, of bringing virtual elements into the Real World. And, if you haven't seen it, I had a chance to interview Jesse Schell. He's a former Disney imagineer, now associated with Carnegie Mellon University, and he's talked extensively about what is now more broadly being called gamification, bringing game elements into life, whether through advanced technology or not.
  6. 6. I think that he has a wonderful, wonderful video that's on G4, if you look for Jesse Schell, S C H E L L, and he will talk about the gamification of real life, things as simple as for example having a wireless chip in your toothbrush, along with a tiny, little gyroscope, and the toothbrush is going to record how you moved it. Do you brush up and down and side to side and do you do it for as many minutes as you're supposed to. And, if you did, all this will be recorded and sent to a family leaderboard, where everyone can see, you get points for brushing well, and might get awards with the family for being the best, most detailed up-and-down and side-to-side brusher you could imagine. I'll be very interested to hear what the panelists have to say today, but I suspect it's going to be less of the traditional type of Virtual World application that I talked with over two years ago with Robert Gehorsam, which was really creating a Virtual World at the time. They were focusing on things like urban environments in Middle Eastern countries, as if we would ever expect to have military actions there, and having soldiers go door to door with their avatar in an urban setting and do the stuff that they need to do to get familiar with the threats around them and distinguishing threats, non-threats and so on. I suspect that, given the events of the mobile technology that we've seen, even just since then, we'll see more Paraverse applications where, for example, you might give people either just cell phones that would gamify a real environment or, if we want to spend the additional money, providing some sort of heads-up display goggles at people. I am getting the word that we are going live. This is Rob Bloomfield handing you over to Orlando.
  7. 7. CONFERENCE ANNOUNCER: Good morning. Welcome to day three of Gametech. Has it been fun so far? We hope it's fun for you guys because it's a lot of work for us. Thank you for participating. I have the pleasure of introducing our expert panel on Virtual World today. Mr. Doug Thompson, CEO of Remedy Limited, a pioneer in immersive media technology, on your right over there; Mr. Richard Boyd, a chief architect of a Virtual World at Lockheed Martin; and Dr. Mic Bowman, who leads virtual research at Intel. We have a moderator today, our very own Doug Maxwell. He's our resident expert at _____. It's all yours, Doug. Let's give our panel a quick applause. Thank you. DOUGLAS MAXWELL: Thank you, Dr. Wynne for the introduction. My name is Douglas Maxwell. I'm with the [AUDIO GLITCH] STDC based in Orlando. I'd like to thank all of you for coming today at day [AUDIO GLITCH] Gametech. We would like to do things, like create a virtual training environment that is actually tutable, which means that it's reusable. And it's a rich area for R&D, and I really enjoy working with folks who are like the ones that are on our panel because they can help us solve some very difficult problems. Some problems that we're talking about would be realistic numbers of participants. So currently, realistically, you can get maybe platoon-level participation in a virtual environment, and we would like to go all the way up to a village or city level, which means that you need orders of magnitude of more objects and avatars in the scene. We would also like a realistic operational area, which means that, instead of just a few city blocks or very small area of country, we would like multiple kilometers represented in the same place at the same time. And then all of it needs to be tied together with realistic physics.
  8. 8. So I've got five areas of research that I would like to see evolve over the next, say, couple of years and address these specific issues. The first one, like I said before, is high numbers of avatars. We would like to create higher scene fidelity and complexity, to replicate the operational environment. AI: AI is a huge headache for us because there's really no one AI that does everything we need so we're talking about context switching in your AI so you want an AI to behave differently depending on what the situation is. We would also like to address usability. We're still using a keyboard and mouse today, and we'll talk about that a little bit later. And then lastly physics. Game physics: using game engines for physics is really only appropriate depending on what kind of scenario that you're playing out. And a higher level, higher fidelity, a higher accuracy in your physics is really needed in the next generation of virtual trainers. So with the opening remarks off to the side, I'd like to further introduce our panel. Last year when we were talking about putting panels together, we were brainstorming on what to do, and I popped off and said, "Why don't we do a futures panel," and almost immediately regretted it. [AUDIO GLITCH] technology that, even I'd put together a panel on the spot right there in the conference room, by the time we got here in March things had already evolved. So what we're going to do today is have a discussion about future directions of Virtual Worlds, Virtual World technology and how we can see what we're observing in current trends to be applicable in our current needs. I'd like to introduce first, Mr. Mic Bowman. He is a principal engineer at Intel Labs, and he
  9. 9. leads the Virtual World infrastructure research project. Those of you who are watching in Second Life may know him as the guy who did the thousand avatar in a Sim project. He received his bachelor's degree from the University of Montana, and his master's and Ph.D. in computer science at the University of Arizona. I'd like to turn it over to you, Mic, for some remarks. MIC BOWMAN: It's cool in _____, but it was warmer in Arizona than it is in [Portland?]. So first of all, thank you for coming. I guess, as Doug pointed out, the group that I run in Intel is responsible for kind of technologies that advance the cutting edge of Virtual Worlds applications and environments. The specific research projects that we work on are scalability related. It's how we can bring to bear computing resources to enable their usages so we focus on order of magnitude improvements in scene complexity, in avatar interactions, total numbers of avatars in a scene as well. In addition, Intel funds through academic research projects a large number of sort of funder-future applications in technologies as well, things that we can bring in to these environments over time, as the technologies mature through the ISTC, the Intel Technology Center for Visual Computing and the Intel Visual Computing Institute, both of which are focused on North American and European researchers. We're really excited about the opportunities we have to advance this technology and enable the usages. DOUGLAS MAXWELL: Next, we have Mr. Richard Boyd, of Lockheed Martin. I always enjoy meeting authors, and I didn't put two and two together until it was too late, but actually have your VRML book. I used it.
  10. 10. RICHARD BOYD: Wow! DOUGLAS MAXWELL: Yes. Yes. So we've got someone who's been around Virtual Worlds for quite a while and has a pretty deep understanding of our problem-set. Richard is one of the creators of the Lockheed Martin Virtual Worlds Labs, and you joined Lockheed Martin in, what, 2007, when they purchased your company? Is that correct? RICHARD BOYD: That's correct. DOUGLAS MAXWELL: And, I'd like to turn it over to you. RICHARD BOYD: All right. Thanks. So as I look around the room, I see a lot of folks, the sort of list of usual suspects who have been admiring the problem of how do we take this medium--whether you want to call it Virtual Worlds or computer gaming--and apply it to some of the more pressing, I think, even existential problems that we have as humans trying to survive and thrive in the information age. Those of you who know me know that, as Doug mentioned, I've been working with gaming technology as a medium for essentially 20, 21 years. I won't steal any of David's thunder from the keynote today, but I first met him when he was working on the movie The Abyss, with James Cameron, trying to solve some very basic challenges that all humans face actually when it comes to trying to understand from two-dimensional blueprints and designs and that sort of thing, what an ultimate sort of design is going to look like. Turns out that about 75 percent of us cannot look at 2D
  11. 11. information, like a blueprint or an image and extrapolate in our head, like what is that 3D environment going to look like. So we solved that problem together by addressing the, I think, four areas where this gaming medium is really powerful. The four areas that we focused on are design. Chances are if you bought one of those kind of do-it-yourself 3D home-design tools in the mid '90s, that was technology we licensed to the Learning Company back then, and, of course, I think we all know today that, if you're going to build a 747 or design the chairs that you guys are sitting in, chances are you'd be doing your effort a disservice if you didn't start by doing some design in a 3D CAD. Entertainment: I think we all know at this point that it's a very powerful medium for entertainment. I think computer gaming exceeding movie box-office receipts may be, was it 1997, and has been sort of outstripping it ever since. The last two is where we're really focused with Virtual World Labs at Lockheed Martin, and I know the community here is really focused on it, and that is how do we solve these two major, pressing existential issues that we have in this century, which is, we've got this increasing complexity we're having to deal with. How do we prepare humans to operate in more complex environments. Now something we know from flight simulation is that [AUDIO GLITCH] earlier and maybe getting access to the equipment or the environment is difficult. If you can put them in a simulated environment and let them practice safely, you get a shorter path to mastery. I think we all know that today. RICHARD BOYD: It turns out that, if you apply that medium to a whole host of issues, like running a McDonald's restaurant or long-haul trucking or anesthesiology--I've got some
  12. 12. folks visiting from Cornell Medical this week, that I brought down to see the stuff at Gametech--that we have the same or similar results. Right? And I hope we're not still having that debate. We were having it as recently as three or four years ago at other conferences about should we really be using gaming as a medium, or should we be using simulation in general to try to improve human performance. I think we all should know today that there's a mountain of evidence. The evidence is in now that, if you're not using gaming, you're costing yourself money and time and performance. And, if we get to the question phase and there's still some of you out there, who haven't received that gospel, then I think we're all really excited about having that discussion with you. The last one is interface. I think, in this [XO-flood?] of information that we're all having to face today daily and these systems that humans are creating today, that are increasingly complex, I mean I think it's fair to say, as Joshua Cooper Ramo says in his book, The Age of the Unthinkable--anybody read that book? Please go out and check that book out. I mean when you see stuff, like what's happening in the Middle East right now, and in Japan you see that we humans are creating systems that are tens of thousands of times more complex than our conception of them, and somehow we think that we ought to be able to control them. Whether it's financial systems or energy systems or the internet, we're dealing with this increasing complexity. I believe firmly, although I haven't yet, I think the jury's still out on this--I think we're going to talk some about interface--I think there's a way to harness this really powerful medium, to help humans make better sense of this bewildering world that we're creating for ourselves: How do you separate signal from noise? How do you learn what to pay attention to or what can be safely ignored? I think we ought to be teaching this in K through 12 education. Right? That's going to be a critical
  13. 13. skill for humans in this age. And again, the things I really admire about the medium is that basically, just to put it in a nutshell, you've got every media type ever used before in human communication, right, available to you. So whether it's video or audio or images, text or even animated 3D characters and objects, we have that in this really powerful medium that we're harnessing. So I'm really looking forward to see how we harness it for simulation learning, as well as new interfaces. Now, one of the issues, and I still remember the first time I went into Second Life, and I think that was maybe four years ago. As someone who's been working in gaming--David will talk about this later--but David and I and our partners created Red Storm Entertainment with Tom Clancy. We did Timeline with Michael Crichton. We did this amazing trainwreck company called [Irock?], with Ozzy Osbourne, and have focused on all these applications of the game technology over the years, but the issue is how do we take that technology and apply it to some of these more fundamental issues that we have today. I'm sorry. So when I went into Second Life the first time, I found this environment where I met some really interesting people, some pretty kooky people. I played around with the tools and realized that I could create in there and make these interesting worlds, but I quickly ran out of things to do, and it didn't meet my expectations as a gamer, right, so I didn't have the complex interactions with non-player characters. I didn't have quests. I didn't have physics. I know a lot of those issues are being overcome, but my first reaction, when I tried to figure
  14. 14. out how could we harness it for doing training and that sort of thing was, hey, I really want that superset of capability that I see in MMOG engines, like the HeroEngine and others that I know the DOD were beginning to look at. So that was about three or four years ago when I was talking to General Lessel and other folks about, "Hey, Second Life is great, and I see how we can have really interesting meetings in these environments and have some interesting collaboration, but if I wanted to do mission rehearsal and have thousands of participants and that sort of thing, I really want this sort of superset of capabilities that I see in an MMOG." And then my thinking has evolved from there to say what is wrong with even using an MMOG today and the problem with Second Life. And, again, I think these environments and these platforms are really good for a variety of things like collaboration and socialization and that sort of thing, and I've really enjoyed seeing the development of them. But, as I'm looking at how do you actually get this used in a government environment that requires today too many unnatural acts. And you guys know what I'm talking about, right? So it's go and get this very large proprietary platform client, download it, install it on your system inside of a firewall. Call your sys admin and say, "Hey, we need the following ports opened so that I can be able to interact with other people." And these things are just [AUDIO GLITCH] approved and all that, and these things are just, they're unnatural acts, and not just for government. I think it's a lot to ask for consumers. Now we do it on the game side because the payoff is, you know, I have this entertaining experience that I can have. But that's when we really started thinking about how do we remove all this friction that is keeping us from taking advantage of the medium. That's
  15. 15. really what I wanted to talk about today is, I believe in the medium. I believe it's a powerful way for us to engage with each other and with information. I'd like to do it in a way that has less friction, that has better interfaces that has a better, more smooth pipeline for how I create content. And the ideal model, once you go through that sort of thought experiment, is, I'd like it to be the way the internet is, which is, if I want to go to the Metanomics website, even though I couldn't get the Flash to work on my iPad to see this streaming thing here, I go to that website, and I'm pretty quickly in that experience. I don't have to usually think about going out and getting proprietary plug-ins or opening up ports or that sort of thing. And, if I want to create content for a website today, well, that's really the energy that made the internet take off. If you wanted to learn html, what did you do? How many of you know html out there? And how did you learn initially? You went to a web page. You probably right-mouse clicked it, and you said, "View source." You said, "Oh, that's really cool how they did that," or, "That's a really nice cascading style sheet," and you reused other people's work, and that's what kind of helped--everybody became a publisher because the barrier to publishing really dropped dramatically. Now the barrier to creating a triple A game title today is pretty darn high. Now the barrier to making an iTunes or an iPad or iPhone sort of title is dropping precipitously, thanks to better tools, a nice little ecosystem that Apple has set up. And what you'll hear--and I don't want to steal all of David's thunder from this afternoon--but what you'll hear him talk about is all of those elements, removing all of that friction on both the content-creation side, we like to use the word democratize and commoditize that whole part of it, as well as the distribution of the players, make it very easy for people to get into the worlds and then develop a business ecosystem such that everybody gets to play.
  16. 16. There are no more walled gardens or proprietary platforms that prevent people from participating in the environment. I think if we can finally make that happen, the barrier lowers. We'll all be able to take more advantage of this medium that we all agree is powerful. And I know I'm taking more time than I expected. DOUGLAS MAXWELL: That's quite all right. RICHARD BOYD: But I'm looking forward to the discussion. DOUGLAS MAXWELL: Thank you, Richard. The next panelist is Mr. Doug Thompson, and I'm having a really surreal experience right now because we're having a complete role reversal. The first time I met Doug was in San Francisco, at the Enterprise 2.0 Conference, where he moderated a panel, and I sat down on the end. So you might notice, out in the crowd, that we're also streaming into Second Life. Doug is CEO of Remedy Communications, but he also owns a show called Metanomics, and we're participating there as well. Those of you out in the Metanomics crowd, please feel free to jump in with questions in chat, and I'll try to get to them. Also, Mr. Doug Thompson is known as Dusan Writer in Second Life, and he also has a little company called Startled Cat, which does storytelling. And one component that I think is really missing in some of our virtual trainers is actually a good story to tell, to engage people so that they really get into what we're trying to teach them. So, Doug, I'll turn it over to you.
  17. 17. DOUG THOMPSON: Thank you. I don't want to get into a big ludology versus narratology debate, but we started Startled Cat because, just picking up a little bit what you were talking about with James Cameron is, you can have really great technology, but why are you going and what is the reason that you're there. And while there are simulations and there are game mechanics that help drive our participation in Virtual Worlds, we also have a belief that this is a tool for storytelling, that it's incredibly compelling and that storytelling can really bring you to an aha moment in a way sometimes that game mechanics can't. Game mechanics allow us to kind of look under the hood of the way that reality is constructed. It can give us incentives. It can give us pathways through content, and, in conjunction with storytelling or storytelling as a separate element, Virtual Worlds, I think, have an incredibly powerful ability to communicate empathy, walk in somebody else's shoes, communicate emotion, and allow us to think about higher concept, higher-level concepts. And, when you're immersed, the quality of Virtual Worlds that provides that immersion, whether it's high-fidelity immersion or not, it tricks the brain. You think that you're there. You think that you're participating in a Real World. There's no difference. The mind doesn't see difference between virtuality and what's real. Which is another interesting aside because I think that you talk about the future of Virtual Worlds, and I love the phrase "The future is here, it's just not widely distributed." RICHARD BOYD: William Gibson. DOUG THOMPSON: And I always say what I've learned about the world, I learned because of Virtual Worlds, where the future is headed. And you think about a lot of the
  18. 18. work that we're doing is in public face in Virtual Worlds. And you think about issues around identity, how we express ourselves through avatars. You think about the ability to collaborate globally, across time zones, across geographies in real time. You think about issues that have come up around privacy. Virtual Worlds have become a kind of test bed for thinking about how we, as humans, work with technology and work with the tools of technology. So I think there's some tensions that I think about in the work that we do. We're doing work, for example, in developing a virtual environment for military amputees. Military amputee goes back home after they've been in a military treatment facility. They lose the support network, the face-to-face support network that they got while they were at a place like Walter Reed. So by providing them with a virtual environment, they can stay connected to their peers, provide each other support and, through an avatar, work out issues, for example, about body identity and maybe, through a virtual environment, set up a sort of mentoring peer-to-peer support network that maybe is not as easily facilitated through things like Facebook. So those are the types of projects that we're working on, where it's not just what happens in the virtual environment, but it's the implications of the virtual environment to the physical or actual world. And these create, I think, some tensions, and you touched on some of these tensions, one of those tensions being around fidelity because I think that there's a spectrum between extremely high-fidelity virtual environments and environments that don't need to necessarily have that level of fidelity. And I think it's a false dichotomy to say that we need to have increasing levels of fidelity because I'm not sure that's true. We were doing a project over the last couple of weeks where we're actually prototyping a retail
  19. 19. experience for a Fortune 500 company. We brought some people in, and we did some storytelling in a virtual environment, and we had a very high-fidelity environment, but really, at the end of the day, all they needed was a table. That's all they really needed was a virtual table because they got into a story outside of their normal way of thinking, and what we discovered from that or the reminder for us was that it doesn't always take--I'm going to be a little bit of a, what's the term for it-- RICHARD BOYD: Devil's advocate? DOUG THOMPSON: --devil's advocate or take another side of this, which is that fidelity isn't always a necessary end goal. I think that other tension between narratology and ludology or game mechanics and social/storytelling experiences, I think there's some interesting tensions to explore there. Finally, I think one of the things--the future is here, but we haven't quite grappled with what it means, and that's around avatar expression. I think we're starting to see technology such as facial recognition, where your avatar can respond based on your physical face, but also other ways of expressing yourself through the identity of an avatar. I think it's interesting to think about our avatars as starting to embed intelligence of their own and that when we think about artificial intelligence, we often think of NPCs and bots. I think there's really interesting work that we could do if we think about what type of intelligence we could embed in our avatars so that our avatars could be sort of partially on, that we could be expressing ourselves in digital spaces, even when we're not necessarily there. I think this happens on its own right now, and we don't acknowledge that this
  20. 20. happens. You're logged into Second Life. And I see a little IM there from somebody. Your avatar is there, and people are chatting with your avatar, even though your level of presence isn't necessarily fully there. I think this idea that we can start to embed intelligence and procedures and protocols into our avatars becomes a very interesting place for AI development. And I'll leave it at that for now. DOUGLAS MAXWELL: The fidelity question is coming up over and over again during the conference, and I believe tutable fidelity in our Sims is important. I think the mission in the scenario should define what level of fidelity that you should have in it. And also the kinds of fidelity should be clearly defined as well. Fidelity is expensive and, if you did high fidelity for everything all the time, it probably wouldn't be viable. I want to switch gears a second and ask the panel a set of questions, and, once we're finished, I would like to invite both in-world participants and the audience to jump in and give us some questions. So the first question that I have for you guys has to deal with decreased budgets. So we're anticipating a drawdown in spending in the Military and pretty much across the board in the government. My first question is: How do you see the reduction in budgets having an impact on how Virtual Worlds are used specifically as a collaboration medium? And, Mic, I'll turn that over to you first. MIC BOWMAN: [AUDIO GLITCH] clearly in a meeting that's held in some kind of virtual environment's going to cost an awful lot less than holding them in the corresponding
  21. 21. physical space, for travel costs, for physical space issues as well. So there are two parts of my thinking about this. One is changing of budgets, change the acceptable level of expense to value ratio for virtual environments. So as we have lower and lower budgets, our willingness to get the very deeply interactive environment that we get when we're physically co-located is less important, and we found this on other kinds of interactions. The second part of that though is--and this goes back to a degree to the fidelity question--is what does it take to make these applications more useful, that is, what can we do in order to get the benefits out of it. One obvious observation, and Doug and I have had this conversation a number of times, is that if you want to host a thousand people in a conference, you better have a virtual space to get those thousand people in the conference. There's certain levels of basic scalability that we need in order to be able to translate things from one place to another place. On things like fidelity, what do we have from our existing conferencing applications? We have a certain amount of interaction that we can get out of NetMeeting. It's a really _____ interaction. But there are pieces of the expression of social interactions in those meetings that's completely lost when you use something like NetMeeting. You can go into an environment, like Second Life, and it becomes more expressive. There's the notion of presence and engagement and immersion into it. But, again, the number of dimensions that we can express, the amount of control that we can express in those is relatively low. So what we accept is in those environments is essentially a translation of our NetMeeting experience with the avatar presence. Right? There's very little else that goes into it. There's a little bit of sort of pre-meeting and post-meeting that you don't get out of
  22. 22. NetMeeting, but your ability to express things is very small. It's I can't--when I'm in a physical meeting, I'm notorious for this. It's the mean time to whiteboard what they talk about, for me, that I have to have a whiteboard marker in my hand, scribbling on a board at a meeting. And I can't do that in any of these virtual environments. Until I can get the fidelity of my expressiveness up to a level for this particular application, where I can spontaneously start writing on the whiteboard and have the same kinds of sort of deeply expressive interactions that I would get in a physical environment, it can't fully take the place of our physical meetings. And so we go back to what's the level of acceptable expression? What's the level of acceptable behavior? One other observation that I'll make on this, Richard talks a lot about games, and that's his way of capturing the world on this thing, and that is by far the most mature and effective application of these technologies right now. Games, [as turning in?] games of simulation, are the most successful use of these as serious applications. The way we talk about it inside Intel is very much about applications. It's not my goal to create a virtual space. My goal is to solve the collaboration problem. And the virtual space needs to be able to express the things I need to be able to express in order to achieve that application. So the application is share or communicate information, I think we're going to see more trends trending to that. If the application is collaborate and create and design, I think we've got a ways to go on the technology, to be able to get to the point where we can express that. DOUGLAS MAXWELL: Nice. Richard, your comments?
  23. 23. RICHARD BOYD: Sure. I mean obviously affordability is on all of our minds, not just within the DOD, but in society to a larger extent. And that is one of the primary drivers behind when you heard [D9?] yesterday talk about the virtual framework and trying to sort of break down all the walled garden sorts of siloed efforts that we see across the DOD and around the world, that's one piece of it. And that's precisely why we're trying to again create an ecosystem approach where we get rid of the walled gardens, proprietary platforms, allow everybody to participate and allow that sort of, finally, the share ability and portability of content so that we're not creating content for one platform and then have it not be portable to others. The company that I had, 3Dsolve, that was acquired by Lockheed Martin, was one of the, I think, six or seven software companies that were contributing to the America's Army game project. I think that some fabulous work was done there. The one regret that I had from that whole process was that a lot of that great content that we created there is sort of tied up in that platform, and there's a lot of people that are really interested in trying to take some of that content in the fabulous training that was created in there and move it over to more portable environments or to other platforms, bring it maybe, "Could you bring it into Second Life? Could you bring it into Teleplace? Could you bring it into VBS2?" And the answer has been a resounding no, for a variety of reasons, some of them based on policy and people and some of them really just based on the technology. So those are the kinds of things we want to break down, that we believe will reduce costs in just the content creation and sharing, finally, [AUDIO GLITCH] work with BP on using Virtual Worlds and advise them on using a bunch of different platforms. And naturally they
  24. 24. replaced a couple of their meetings where they've got folks in Azerbaijan and in South America and everywhere, and every year they were bringing them all to London. And, of course, the individuals were really happy about that annual meeting, right, to get out of Azerbaijan or the 'stans or wherever they happened to be and get to London for these face-to-face meetings, with a lot of going to dinner and social interaction. So they did this test where they replaced it with a meeting, I believe it was in ProtoSphere, and the savings were dramatic, as you pointed out. Now, one of the things we learned from that though, and this is something that I feel, I also agree that we have a ways to go, in terms of really recreating the social aspects that we're all enjoying here in Virtual Worlds because I believe that there is an incremental benefit to going into these environments. Now if you have a lot of experience, of course, and you've played World of Warcraft, and you've been in games, and you have Second Life experience, you're going to get a lot more out of the meetings in these environments, like Teleplace or Second Life, than people that are showing up for the first time, simply because you're more adept at how do you share content, how do I collaborate with others. I'm not running around jumping up on the table and learning how to use my avatar for the first time. Within Lockheed Martin, we've been, again, exploring this technology for quite some time, and we've had a study of I think we had a thousand Teleplace seats. Is that right, Remy? REMY: Yes, you did.
  25. 25. RICHARD BOYD: And I've used it for program management and just general meetings and that sort of thing. We've watched people go up that initial learning curve, where they're again not really aware of the etiquette and protocol that we all have here. You guys aren't running around jumping on the tables. It would be fun, kind of, if you did, but--we'll do that experiment later. But you're sitting in your seats, and I'm speaking, and everybody is listening, to some extent. Right? In these Virtual Worlds, sometimes you got people just running around everywhere, not paying attention to the speaker, and I noticed yesterday in Remy's thing in Teleplace, he had this great little tool--because that's one great thing about the medium is, we do have godlike control of everything in these worlds. We've created them. We've summoned them into existence. And Remy was able to grab all of us--there were 15 of us, right--and slam us down in our seats and then make us follow him around as he showed us stuff. And we'd lost control of our avatars. I wish I could do that in Real World meetings so that's one benefit. But I think, in terms of collaboration right now, the benefit is incremental over things like NetMeeting and others, and I'm still really interested in seeing us get rid of some of that interface friction. And, again, I know we're going to talk about that shortly, and that sort of thing. Now, when it comes to training though, as I said earlier, the improvements and effectiveness and efficiency that you get from being able to have people collaborate in these environments, whether you're training on the Littoral combat ship where there's folks on a real ship out somewhere training right now, and the rest of us can be at our stations in a virtual sort of analog of that ship and participate in those live exercises with them, that
  26. 26. is really powerful stuff. And it's got a demonstrable effect on creating a shorter path to mastery. And, after all, that is what, as you pointed out, that's what we want. I don't want to learn how to use my iPad better. I've never learned how to type. I'm really glad that I never did because soon I'm not going to have to. I'm not going to even need it because we're removing all of those barriers, and soon it's going to be this frictionless way for us to, again, do our jobs, which is collaborate with each other and information and accomplish our task and, hopefully, with a shorter path to mastery, which is, again, there's still organizations that are not taking advantage of this, and I believe that very soon those organizations that don't will be completely outstripped by those who do. And individuals who take advantage of this in the next ten to fifteen years will appear super-human to those who are not taking advantage of it. So it's all about easier interfaces, more commoditized and democratized sorts of ways to develop the contents, and then taking advantage of this medium. Game mechanics are just one piece of it. The point I made earlier is, it's the fact that we have all these media at our disposal, and, to me, it's actually--this is what I saw on the set of Avatar with James Cameron is this godlike power over a medium where you can do a lot more even than you can possibly with the movies, with a passive linear medium. Essentially, this is a nonlinear interactive medium where you can get emergent behavior and be surprised and that sort of thing, and I think it's incredibly powerful, and I'm looking forward to seeing where it goes. There are ways to save money, certainly on meetings. Then on training. I think that's going to be a big game-changer in the next decade if D9's vision is realized.
  27. 27. DOUGLAS MAXWELL: Doug, what do you think? DOUG THOMPSON: I mean I guess just a couple things. I mean I like it when times are when you're scrambling around for budget because it makes you think that it's not Virtual World budgets that [AUDIO GLITCH] just budgets in general. So people are saying, "I've got to figure out ways to do more with less." And I actually think these guys have pointed out some great use cases for Virtual Worlds, where you can do more with less. But also kind of make two other points. One is that we don't typically talk about Virtual Worlds in isolation. They're part of a larger project so there might be a social media component. There might be a web component. You might be taking machinima in order to create a training video, and that machinima gets posted to an intranet or out to YouTube. You look at this Metanomics event, we're investing in an avatar presence in Second Life, but there are also people watching from the web so you are in a Virtual World, and then we may probably have a couple of hundred people watching on our stream. And, if you can't watch on our stream, you're probably participating in Twitter. And so when we look at investments in virtual technology, we look at that investment as part of a larger set. And back to what Mic was saying, it's a solution. If somebody has a problem and you can give them a solution and one of the technologies happens to be Virtual Worlds, they'll pay for the solution. The other thing I think that this encourages is interesting collaborations and looking at unique ways to bring partners together that might not have been brought together before. I think about the work we're doing with military amputees, just as an example. There's all
  28. 28. kinds of funding models that could work with that, including funding models through, for example, the VA where you're looking at actual rehabilitation, and you're looking at medical outcomes. But, you can think about other types of partnerships that you can bring together into an environment like that. If there was one thing I would encourage and one thing that we've learned in Virtual Worlds is that these abilities to collaborate across disciplines and with partners that you might not have thought about collaborating with can unleash value that maybe you hadn't anticipated. DOUGLAS MAXWELL: I agree. Thank you. Let's talk about security for a moment. One of the largest barriers to adoption in the Military, in particular, and across the federal government is security. We currently have a zero-risk tolerance policy, if you will. Steven Aguilar, up here in the front, and I lived that for a couple years, just trying to get the basics in place at the Naval Undersea Warfare Center for us to do demonstration and prototype work. I want to ask you guys what you think about new approaches to security and how we can possibly either drive new policy that D9 talked about acceptable risk yesterday? What are your thoughts? Richard, let's start with you. RICHARD BOYD: Sure. Obviously since the Virtual World framework tasking came from D9's office, I think Security, with a capital S, was right behind affordability or sometimes they swap places, in terms of how important they are in our tasks. So that's something that I think obviously is a must-happen, and it's something that, of course, has held up the adoption of some of these technologies in the DOD and in the government at large. So our goal with this effort to democratize and commoditize and reduce friction with the framework is to make sure that anything that someone creates in this new framework or
  29. 29. deploys can be made as secure as any website today or any web deployment today. And that, again, is one of the things we're going to be harnessing from what we've learned from the internet is making sure that that is the case. Obviously, saying that it's as secure as any other internet deployment means that it is insecure, but we routinely deploy content today within firewalls, within secure networks. And, again the way we're trying to approach this platform is to say it's just like any other medium. It's like any other media type. That's all it is. I think the public and the media really talk about gaming and Virtual Worlds, and we play it up quite a bit, but I really think of it again as just another medium that happens to contain all other media types and, therefore, allows new forms of storytelling, new forms of expression, new forms of collaboration, both with other people and with information. New ways of looking at information that we're only beginning to harness. But the idea is, by using secure socket layers and everything we've learned from the internet, anything we deploy on this environment should be as secure as any other web deployment. DOUGLAS MAXWELL: Doug, what are your thoughts? DOUG THOMPSON: Yeah. Two things. I think there's two pressures. One is that enterprise and military are driving an agenda for security so I think you're seeing hacks, and I think the kind of work that you guys are doing to push that agenda will continue so we've seen hacks. But on the other side of it, I think that the push for more ubiquitous access, whether it's through a browser, whether it's using things, the Unity plug-in is still a plug-in. But, you're sort of seeing two pressures that are leading us to a place where you
  30. 30. will be able to say that you could start firewalling things, and those two pressures are, one, we want [AUDIO GLITCH] storytelling, [AUDIO GLITCH] you do need secure environments. DOUGLAS MAXWELL: Mic, you guys are looking at it at a completely different angle. MIC BOWMAN: Yeah. DOUGLAS MAXWELL: So I'm not going to steal your thunder. Go for it. MIC BOWMAN: Okay. The question of security in a Virtual World is by itself the question is a really interesting problem and trying to understand what we mean by that. It happens on many different levels. There's the basic level. This is a network application, and we can treat it like a network application and try to secure it like a network application. But, even when we do that, it's not quite the same. Right? There are a number of different fairly unique kinds of attacks that you can have in these environments. And a web application, a user that comes to that http server brings very little context, and the context that they bring tends to be very static. Whereas, an avatar that comes into a simulation, into a Virtual World that's being simulated by a space, brings a significant amount of context with them. They bring their avatar, and they bring all of their inventory, call it whatever you want, with them. And that inventory frequently includes things that require computational resources on the server. And so every avatar that comes into your server brings with them this context, and the context is potentially destructive to the server.
  31. 31. And, just to give you a very creative example: There's some wonderful DDoS attacks about people with very complex avatars walking by a room outside. You can't even see them, but the simulation engine, unless the simulation engine is fairly smart, it's still sending updates down the network for all of the changes that are happening to that avatar. And so network resources are being consumed on my connection to the server, by somebody else who's on the server. That doesn't exist in our web applications today. We need to rethink that. But it gets even more interesting, right? That, in order to do the optimization, we do things like Dead Reckoning and animations and other things, where we take sort of server-side simulation concepts and push them off onto the client, in order to get better behavior and better application and use of the resources. Well, what that really means is that when I'm visiting a site, somebody who walks through is actually consuming not just resources on the server where the simulation's occurring, where the virtual environment or game happens, but also on my client because they're consuming the resources as I render their avatar and as I do more and more of these optimizations where I'm moving my computations to the client, they're using more and more of my resources. And, if they're now starting to execute things on my client, that opens up a whole 'nother spectrum of kinds of attacks that can occur. There are a variety of levels that Intel's working on: different features and capabilities that allow us to provide layers of security, secure execution environments, all the way up through support of programming languages that allow us to firewall applications for every [bot?]. So there's a whole stack of solutions that have to come in there. There's a second
  32. 32. part of the security problem. So the first part is just how do we take this network application and make it secure. The second part is the information [on conflict?] that's in there. So this whole notion of persistence and shared state that makes this such a powerful media is also something that introduces a set of complexities into the problem. That is, the information that's in that shared state may not be consumable by everyone who has access to it. And so now we have a problem of presenting the shared state in a way that different perspectives and different viewpoints can take advantage of the sharing, but also see only the stuff that they're supposed to see in those environments. That kind of very compartmentalized sharing of information is very much a military kind of application. There are certainly applications in enterprise, but they're less open. But we see similar kinds of things in just general applications for consumers. Content creation in places like Second Life goes through a number of these problems of piracy, picking things up and moving it around, and that whole problem of ownership and moving content is yet another dimension to this. And, again, there are solutions that can be brought to bear on it. The question is how much weight do you want in the security system versus how much freedom do you want in order to act and the cost of it as well.
33. 33. page would you like us to view?" And so you know the policy is lagging, and we need to educate our policymakers to help us with that. Yes, ma'am? AUDIENCE: Thank you. Can you hear me okay? DOUGLAS MAXWELL: We can. AUDIENCE: Richard, you opened up with something I really, truly believe in, that today we have really complex systems and problems that really require collaboration to solve. Mic, you mentioned the goal to collaborate in design around those problems. [AUDIO GLITCH] have for you is: Why not take the approach of creating contextual environments that better showcase what the system problem is? For example, are you looking at the world of games? All the games that help explain complex systems the best, like SimCity, Civilization, and StarCraft, explain supply-chain optimization problems. These aren't in the style of an FPS; they're more real-time strategy, the god view. So why avatars? Why not instead, for example, Foldit, the protein-folding game out of Washington? A collaborative, contextual environment that showcases the problem better, versus avatars. I'm just curious. I'm pushing back a bit. MIC BOWMAN: Okay. I'll go. DOUGLAS MAXWELL: Okay, go ahead. MIC BOWMAN: [First one's easy to do?] I have two responses. The question of whether
34. 34. an avatar's a part of it or not is independent of the question of what kind of problem I'm solving. Am I looking at the right set of problems? So there's a social application, a social need that Second Life solves, that's different from a protein-folding visualization. So we created sites largely in collaboration with the IEEE and ACM around the Supercomputing Conference, largely as a way to explore what we call collaborative visualization, a way of looking at and interacting with very complex datasets. And so we have a guy, for example, Aaron Duffy at Utah State, who uses OpenSim essentially as an interface to configure and interact with his biological evolution systems. So he can do these very complex simulations on the backend, and he's using this simply as a front end for seeing the evolution of the populations in a very nice, graphical way. It also happens to be the case that we can join him and provide feedback, and so it becomes a very collaborative environment. A second example of that is, we're doing some collaboration with Sandia National Labs around this thing we call water wars. Sandia has a very rich set of simulations for hydrology in the Rio Grande Valley in New Mexico. They have a very difficult time communicating the results of that to their constituencies. And so we're using essentially a game front end that's being backed by a very rich, complex simulation environment, in order to communicate the long-term results of decisions. And so you play it as a game, but you're really configuring and interacting with a very rich simulation on the backend. And it's those kinds of applications, that conversion from thinking about it as a game to thinking about it as an application that has game properties, that we're trying to explore. So yes, the apps that I think more about are protein-folding and serious games, and less about sort of social interactions.
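The pattern Bowman describes, a thin game-style front end driving a much richer simulation behind it, can be hinted at with a toy model. The reservoir arithmetic below is invented purely for illustration; it is not Sandia's hydrology simulation, and all of the names and numbers are assumptions.

    # Toy illustration of a game front end over a simulation backend. The
    # "simulation" here is deliberately trivial; in the real pattern it would be
    # a rich external model that the front end only configures and queries.

    def simulate_reservoir(storage, inflow, allocations, years):
        """Advance a one-bucket water model and report storage year by year."""
        history = []
        for _ in range(years):
            storage = max(0.0, storage + inflow - sum(allocations.values()))
            history.append(storage)
        return history

    def play_round(storage):
        """One front-end 'game' turn: player decisions in, long-term results out."""
        allocations = {        # the player's choices, same units as inflow
            "cities": 40.0,
            "agriculture": 55.0,
            "habitat": 10.0,
        }
        history = simulate_reservoir(storage, inflow=100.0,
                                     allocations=allocations, years=10)
        verdict = "shortage" if min(history) == 0.0 else "sustainable"
        return history, verdict

    history, verdict = play_round(storage=50.0)
    print(verdict, history)   # the game surface; the model does the real work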
35. 35. RICHARD BOYD: Yeah. Among my collective of misfits who have been looking at these problems, we've been talking a lot recently about things like Katrina-like disasters or, of course, what's happening in Japan, and looking at social media. Reid Hoffman invested in my last company and was on the advisory board, and he'd explained social media to us going all the way back to 2003, and, frankly, I saw the Moore's Law and Metcalfe's Law trends, but didn't know that we'd want to use it just as a mirror once we had all that great technology. I think there's another step that we're about ready to take, which is, again, think about everything we just talked about with the medium, think about the power of social media, and what if we could have an analog model of the world where, with the right filters, I could coordinate NGO response to what's happening in Japan or what happened in Haiti or elsewhere. And we're seeing a lot of examples of that. I just saw recently that Harrison Ford has a new effort with Facebook, called Ecotopia. And going back to what you said, it is a massively multiplayer sort of game environment, but it is going to be a very simple interface, just like FarmVille and some of these other efforts. But it's about coordinating people and energy and collaborating on a problem and taking advantage of this medium to get people to work together. When we first started working on VRML in the early '90s with Mark Pesce and Tony Parisi, and when David and I wrote the book on that in '96, the idea that we had originally was an analog model of the world, and Michael Jones was involved in those discussions as well. He's now CTO at Google Earth, of course. But the idea is, make an analog version of the world, that's synthetic, that we can play with, and I know, Feydra(?), you talk about this a lot, about
36. 36. using the medium to map the future. And it turns out to be really, really good at that, because you can do what-ifs, and you can say, "What if this changed or that changed, how would we deal with that problem? Let's run through a simulation of it before it actually happens and see how we would coordinate and respond to it." But I think that's a perfect application of the medium. DOUGLAS MAXWELL: Very good. Doug, would you like to weigh in? DOUG THOMPSON: Well, just to say I think it's interesting, the question you asked and the responses, because I think it's showing that the boundaries between, for example, serious games and Virtual Worlds are blurring. And so we come back to our main point, which is: What's the problem? Then find the right solution for that problem. But I think you're bringing up an interesting question, which is a couple of things. One is that I'm not sure yet that [AUDIO GLITCH] as firmly worked out as they will be a couple of years from now. I also think that we'll start to see more interest and work around 3D environments themselves: destructibility, entropy, environments which change on their own without management. And so as 3D environments become intelligent, they can actually become kind of like data landscapes. And then there's this question about how much presence you need, whether you need an avatar, so you end up with a spectrum of potential solutions depending on what your problem is. DOUGLAS MAXWELL: I have one last rapid-fire question for you. We only have a couple of minutes. But when are we going to ditch the keyboard and the mouse? Doug, you go first.
37. 37. DOUG THOMPSON: Well, as I said, the future is here, it's just not widely distributed. I think we're [staying with?] Microsoft Kinect first, because it's a consumer application and because people are hacking it and using it as an interface for things like Second Life. We're on the road. Whether people always want to be moving around as they're accessing data, we'll see. DOUGLAS MAXWELL: Thank you. Mic? MIC BOWMAN: You want rapid-fire answers. DOUGLAS MAXWELL: Oh, yeah. MIC BOWMAN: Very tough. I guess I'll focus on one part of it, and it's just the observation that, when we look at system architecture today, the amount of computational power that goes into output is grossly out of balance with the amount of computation that goes into processing input. Until those line up, until we think as much about how we throw computation at the input problem, at activity inferencing, at managing the very, very rich set of sensors that we now have in our devices and how we interact with mobile devices and the many devices we have available to us; until we throw the computation, and understand how to throw the computation, at managing that large amount of input that's largely ignored right now, we're going to have a hard time getting away from the keyboard and mouse. DOUGLAS MAXWELL: Thirty seconds, Richard.
38. 38. RICHARD BOYD: Well, I think the best interfaces are invisible interfaces, and that's the trend that we're heading towards right now: ambient, fully ambient interfaces. When we first heard about Microsoft Natal, David and I went out to meet with Alex Kipman, the inventor, and it turned out Jaron Lanier was there as well. And Jaron, of course, is the guy who invented the Data Glove years ago, and we spent a pretty rich day talking about how the interface is going to disappear. But this whole Kinect thing is really about a sensor revolution that's happening. Of course, at Lockheed, we know a little something about sensors. They're getting incredibly cheap, and this era of us learning how to adapt to the devices, I think, is rapidly coming to a close. And I think the companies that are going to win, like the Intels and whoever, are the ones that figure out how to make this stuff adapt to us and make it a lot more natural. We're using natural-language processing and gesture. Another one of my colleagues, Frank Bozeman, coined a term--you guys have heard of digital immigrants and digital natives and all that stuff from Marc Prensky--he coined the term gestural natives. And that's like my five-year-old today. I often tell the story of when she was three: from the time she was three, she already had a year or more of experience working with an iPod Touch and that interface. I caught her one day swiping her hand across the front of my television. I asked her what she was doing, and she said--she speaks Italian--so she said, "Daddy, [è rotta?]." It's broken. It's like, "Oh, you're trying to change the channel." And I explained to her the three or four things I had in my den, and that I use this one for that device. And she just looked at me, puzzled. And I said, "Of course, you're right." Why don't these things just do what we ask them to do? Why do I have to figure out the interfaces? And I think that age is ending soon.
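As a small, hedged illustration of that gestural-natives point, here is a minimal swipe detector of the kind a sensor-driven interface might run over tracked hand positions. The input format (timestamped horizontal positions) and the thresholds are assumptions made for the sketch, not taken from Kinect or any real gesture SDK.

    # Minimal swipe detector over tracked hand samples: (time_seconds, x_metres).
    # Thresholds are illustrative only, not values from any real gesture SDK.

    def detect_swipe(samples, min_distance=0.30, max_duration=0.6):
        """Return 'left', 'right', or None for one window of hand samples."""
        if len(samples) < 2:
            return None
        (t0, x0), (t1, x1) = samples[0], samples[-1]
        duration = t1 - t0
        displacement = x1 - x0
        if duration <= 0 or duration > max_duration:
            return None                  # too slow to read as a swipe
        if abs(displacement) < min_distance:
            return None                  # hand did not travel far enough
        return "right" if displacement > 0 else "left"

    # Example: a quick 40 cm movement of the hand to the right reads as a swipe.
    print(detect_swipe([(0.00, 0.10), (0.15, 0.25), (0.30, 0.50)]))   # "right"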
  39. 39. DOUGLAS MAXWELL: That's a fantastic close. Thank you. Thank you to the panel. I appreciate your participation, and I'm going to turn it back over to Dr. Wynne. DR. WYNNE: Thank you. On behalf of Gametech _____, I'd like to present the panel with the Team Orlando coin for your presentation here. RICHARD BOYD: Do we have to share it? DR. WYNNE: It's good for I think a beer somewhere around here. I'm not sure. Could you please fill out the survey and drop it at the door. And do we have a way for the Virtual World to fill out the survey? We'll capture the chats. Okay. Thank you for coming. Document: cor1100.doc Transcribed by: http://www.hiredhand.com
