May 9, 2012
Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.
Within the next five years, your living room may become a real-world Super Mario Brothers level, your furniture may become the terrain for the next Starcraft, and your front yard may become the sidelines of a Madden football field. As augmented reality technology improves, gaming as we know it will move from the screen to the tabletop to the living room to the entire building—and eventually out into the world.
Augmented reality is essentially the idea of rendering computer-generated media on top of our view of the world around us, creating the illusion that media has been pushed into the world and merged with it. This technology can completely change the way we see, hear, touch (and perhaps eventually smell and taste) the world around us.
To create this illusion, pieces of computer-generated media must be “spatially registered,” or appear to be placed in 3D within the physical world. To place the media in 3D, the computer must “track” physical objects, continually monitoring their locations relative to the display. Until recently, it was only possible to track simple square black-and-white AR markers and create 3D graphics on top of those markers. Now it’s possible for a mobile phone to track unique real-life objects like a dollar bill or a 2D image on a page or game board.
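To make "spatial registration" concrete: once tracking recovers an object's pose (its rotation and translation relative to the camera), any virtual 3D point anchored to that object can be mapped to screen pixels with a standard pinhole-camera model. The sketch below is purely illustrative, not how Vuforia or any particular toolkit works internally, and every number in it (focal length, pose) is made up.

```python
import numpy as np

def project(point_obj, R, t, focal=800.0, cx=320.0, cy=240.0):
    """Map a 3D point in object coordinates to 2D pixel coordinates.

    R, t: the tracked object's pose relative to the camera.
    focal, cx, cy: toy camera intrinsics (focal length, image center).
    """
    p_cam = R @ point_obj + t          # object space -> camera space
    x, y, z = p_cam
    return (focal * x / z + cx,        # perspective divide, then shift
            focal * y / z + cy)        # into pixel coordinates

# Identity rotation; the tracked object sits 2 m in front of the camera.
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])

# A virtual coin hovering 10 cm above the object's origin (y is "up",
# so negative y in this camera convention).
u, v = project(np.array([0.0, -0.1, 0.0]), R, t)
print(round(u), round(v))  # prints: 320 200
```

Tracking is the hard part; once the pose is known, drawing registered media each frame is just this projection repeated as the phone moves.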
The next step in the evolution of AR gaming will arrive when the technology can recognize and track collections of more complex objects, allowing us to develop games that put AR media in our home or work environments.
The Xbox Kinect has given us a glimpse of what will be possible when the computer can understand the 3D structure of the world. When similar technology becomes available on our mobile phones, we’ll begin to see the true power of 3D AR games. Rather than being limited to playing on a TV, we’ll be able to move around an entire room or house, using our phones as the lens through which we see into complex game worlds. That armchair in the corner will become a fire-breathing prickly plant, Mario will jump from a bookshelf to the top of your doorframe, and the painting on your wall will suddenly fill with gold coins.
To achieve this kind of future AR gaming experience, the technology will need to deliver two things:
1. Tracking over wide areas: AR apps are built to “track,” or recognize, certain objects like the aforementioned dollar bill or 2D image, and to register where the phone’s camera is in relation to those objects. The more objects we can track, and the larger the areas we can track them across, the sooner we’ll be able to move AR away from small, controlled experiences and out into the real world.
2. Making sense of the world: Once we can track enough objects in the real world, future games will be able to build experiences on the unique characteristics of the physical environment around you. True AR game levels will not be the same for everyone, but will take advantage of the unique features of your surroundings. Each piece of furniture will become a new obstacle in the level. The room’s colors, textures, and knick-knacks will become integral to the game. Shelves will become vantage points for snipers, or challenging platforms for Mario to reach.
And as we move further into real world gameplay, iconic places will begin to host carefully authored experiences. Consider Times Square, for example. Anyone who’s been there knows the Jumbotron, the TKTS stairs, and its iconic billboards. Imagine if someone spent the time to build a game for this unique place, which you would bring to life with your smartphone. Would the Zombie King be standing in the center of the square, on top of the TKTS booth? Would the frame of the Jumbotron become a portal through which aliens invade? Could groups of tourists, waiting for bargain theater tickets, defend New York from the Apocalypse?
By putting the game content out into the world, we can create new kinds of play experiences that are qualitatively different from experiences made possible by consoles or non-AR handhelds.
We are beginning to see evidence that if content is put “in” the world, even on a tabletop game board, players will begin to leverage their bodies more and more. They’ll use movement and physical ability more directly than they would if they were just looking at a flat-panel display, even if they’re playing a console game with a handheld or full-body motion controller.
Even more exciting is the hope that, by having multiple players literally step “into” the video game world together, we’ll create new kinds of social experiences. Players will face each other and interact both virtually and physically.
As more powerful mobile processors and GPUs, new kinds of sensors, higher-speed networks, and eventually immersive, see-through, head-worn displays arrive, AR technology will follow suit. Take, for example, software like Qualcomm's own Vuforia technology, which my students and I use to develop AR games like NerdHerder at Georgia Tech. With that technology integrated into the Unity3D game engine, we're able to target dozens of different smartphones on iOS and Android. As we continue to integrate technologies like Vuforia into our Argon AR-enabled web browser, it'll be even easier to create and distribute a wide variety of applications across a diverse collection of platforms. Add new form factors into the mix and we’ll be able to create game worlds that not only entertain, but can also serve as platforms for new kinds of education, tourism, and business applications.
But first, we’ll get to jump on the furniture with Mario.
This article is commissioned by Qualcomm Incorporated. The views expressed are the author's own.