
Augmented reality will (eventually) reinvent how we see the world

Feb 10, 2015

Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.

Kyle Monson is a tech journalist and co-founder of Knock Twice. The views expressed are the author's own.

Science fiction has long dazzled us with visions of what meeting someone new, investigating a crime scene, or walking into a store will be like in the future. We see computerized overlays streaming information right in front of our eyes—bios, rap sheets, directions, even coupons appear as if from nowhere. 

In the last few years, wearable displays have begun to give us a glimpse of that future, but for the most part the picture hasn’t measured up to the fictionalized version. The problem is that these wearable displays and augmented reality (AR) are supposed to, well, augment our reality. But instead of enhancing the world around us, these technologies tend to yank us out of it. 

Tiny screens, limited functionality, and painfully small fields of view force users to fix their attention on a small readout. The reason is simple: In the real world, your eyes constantly refocus across distances (anyone with good vision can shift effortlessly from a tree or street sign up ahead to the person walking right in front of them). AR displays park an image extremely close to your eye, disrupting that normal focal range.
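To put rough numbers on that (a back-of-the-envelope illustration of my own, not a figure from any vendor mentioned here): the focusing effort an eye must exert, called accommodation, is the reciprocal of the viewing distance.

```latex
% Accommodation demand P, in diopters, for an object at distance d, in meters:
P = \frac{1}{d}
% A street sign 20 m ahead:       P = 1/20  = 0.05\ \mathrm{D} (essentially relaxed)
% A pedestrian 2 m ahead:         P = 1/2   = 0.5\ \mathrm{D}
% An overlay focused at 0.5 m:    P = 1/0.5 = 2\ \mathrm{D}
```

Snapping back and forth between a small fraction of a diopter and a couple of diopters every time you glance at the readout is exactly the strain described above.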

But a few tech companies are working to create the seamless AR experience we see on movie screens. Advances in display technology will soon make overlays easier to read, while improved software and more-powerful processing will allow for AR experiences that are more realistic and let users share this enhanced world with one another.

Today’s wearable AR devices, a crop that includes Google Glass, the Vuzix M100 smart glasses, and a handful of similar offerings, all work more or less the same way: A small screen attached to a pair of glasses or a headband hovers in front of the wearer’s eye. Powered by an onboard computer and/or a paired smartphone or tablet, this screen creates the illusion of a small sign floating a few feet ahead. Alerts, calls, messages, and other simple information hover at a readable distance.
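As a rough sketch of that pipeline (hypothetical names throughout; none of these products exposes exactly this API), the paired phone does the heavy lifting and the headset simply renders short text at one fixed virtual distance:

```python
# Hypothetical sketch of a "glanceable" AR pipeline: a paired phone
# filters notifications and pushes short strings to a near-eye display.
from dataclasses import dataclass

@dataclass
class Notification:
    source: str      # e.g. "sms", "calendar"
    text: str
    priority: int    # 0 = ignore, higher = more urgent

class GlanceableDisplay:
    """Stand-in for a head-mounted microdisplay fixed at one focal plane."""
    VIRTUAL_DISTANCE_M = 2.5   # optics make the image appear a few feet away
    MAX_CHARS = 60             # tiny screen: keep messages short

    def show(self, text: str) -> None:
        print(f"[HUD @ {self.VIRTUAL_DISTANCE_M} m] {text[:self.MAX_CHARS]}")

def push_alerts(display: GlanceableDisplay, alerts: list[Notification]) -> None:
    # The phone, not the headset, decides what is worth a glance.
    for alert in sorted(alerts, key=lambda a: a.priority, reverse=True):
        if alert.priority > 0:
            display.show(f"{alert.source}: {alert.text}")

push_alerts(GlanceableDisplay(), [
    Notification("calendar", "Standup in 5 min", priority=2),
    Notification("news", "Celebrity gossip", priority=0),
])
```

The design point is that the display itself stays dumb: one focal plane and a character budget, with everything contextual decided on the phone.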

Granted, this type of small, focused display does have its uses. Philips, for instance, has worked with doctors to develop apps that let them check patient vital signs during surgery without looking away from the operating table. The Strava app keeps cyclists’ eyes on the road by putting their ride stats on a head-up display. And Google recently acquired Word Lens, an app that translates text seen through Glass’s camera, and folded it into the Google Translate team.

But despite having true utility in the right circumstances, AR devices have yet to find a place in everyday life. Why? Because they haven’t successfully broken down the wall that technology can put between you and the people immediately around you. The AR of the future will. 

The first key to that change will be an upgrade in display technology. AR screens will evolve from small “glanceable” displays to ones capable of overlaying information on your entire field of view, while new apps will allow that content to interact seamlessly with the real world. That way you’ll be able to focus on the AR image and where you’re going simultaneously.

At CES this year, for instance, Optinvent announced its ORA-X smart glasses, a headphone-and-head-up-display combination that begins to solve the field-of-view problem. The display sits on a hinge, letting users pivot it directly in front of their eye to transition into full AR mode. And this month, Microsoft unveiled its HoloLens platform, a prototype AR system that transforms real-world surroundings into complete digital experiences, such as Minecraft, a Mars landscape, or holographic Skype sessions.

At the same time, Washington-based company Innovega is taking even deeper control of what users see. Its iOptik system consists of a pair of glasses with transparent RGB displays and light-filtering contact lenses. As AR images are projected onto the glasses, a series of three filters on the contact lenses allows only certain wavelengths of light through at certain angles, snapping images and text into focus without blurring normal vision.

But Magic Leap, an AR startup that recently secured a $542 million investment from Google, Qualcomm, and others, has shown signs of being the furthest along in this trend—not only because of how it displays AR overlays, but because of how it might create real group experiences in 3D space.  

It’s early days for Magic Leap, but reports have surfaced about how the company’s displays might work. Rather than producing an image outside the eye for the wearer to see—or for a system like iOptik to refocus—Magic Leap would use a small fiber-optic projector to deliver an image directly to the retina. At the same time, patent applications hint at advanced head-and-eye-tracking and light-blocking technologies that point to an ability to create convincing 3D objects that move with the wearer’s gaze. Think of it like a more granular version of how virtual-reality displays like the Oculus Rift refresh their images.
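Magic Leap hasn’t confirmed any of this, but the head-tracking half of the idea is easy to sketch: keep the virtual object’s world coordinates fixed and re-project it through the headset’s freshly tracked pose every frame, so it stays pinned to the room rather than to the wearer’s face. A minimal sketch (my own toy math, not Magic Leap’s pipeline):

```python
# Hedged sketch of world-locked rendering: the hologram's world position is
# fixed; each frame we re-project it using the headset's latest tracked pose,
# so it drifts across the display exactly opposite to the wearer's motion.
import numpy as np

def project(point_world, head_pos, head_rot, focal=1.0):
    """World point -> head frame -> simple pinhole projection (x, y on 'screen')."""
    p = head_rot @ (point_world - head_pos)   # express the point in the head frame
    if p[2] <= 0:
        return None                           # behind the wearer; don't draw
    return (focal * p[0] / p[2], focal * p[1] / p[2])

hologram = np.array([0.0, 1.5, 4.0])          # a fixed spot in the room
facing_forward = np.eye(3)                    # head looking straight down +z

for step in range(3):                         # wearer side-steps to the right
    head = np.array([step * 0.2, 1.6, 0.0])
    x, y = project(hologram, head, facing_forward)
    print(f"frame {step}: hologram appears at ({x:+.3f}, {y:+.3f})")
```

Eye tracking would presumably refine this further, adjusting not just where the object is drawn but the depth at which it is focused, which is where a retinal projector would earn its keep.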

What makes such minute image processing particularly interesting is how it might lead to multi-user AR experiences. If Magic Leap’s system—which, according to The New York Times, is currently in very early stages and is still something of a behemoth—is robust enough to map 3D overlays, objects, games, and even people to one person’s eye, then a group of networked Leaps might be able to recreate the experience from multiple vantage points (that is, from multiple users’ points of view) at once. 
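Sharing, in turn, becomes mostly a bookkeeping problem once single-user tracking works: the object lives at one set of world coordinates, and every networked headset projects those same coordinates through its own pose. A toy illustration (hypothetical, with both heads assumed to face straight ahead):

```python
# Hypothetical sketch of shared AR: one world-space anchor, many viewpoints.
# Each networked headset receives the same coordinates and projects them
# through its own tracked pose, so everyone sees the same object in the
# same place in the room.
import numpy as np

def to_screen(anchor, head_pos, focal=1.0):
    """Project a world-space anchor for a head at head_pos looking down +z."""
    p = anchor - head_pos
    return (focal * p[0] / p[2], focal * p[1] / p[2])

shared_anchor = np.array([0.0, 1.0, 3.0])     # broadcast once to all headsets

headsets = {
    "alice": np.array([-1.0, 1.6, 0.0]),      # standing to the left
    "bob":   np.array([ 1.0, 1.6, 0.0]),      # standing to the right
}

for user, pos in headsets.items():
    x, y = to_screen(shared_anchor, pos)
    print(f"{user} sees the object at ({x:+.2f}, {y:+.2f}) on their display")
```

The hard part, as the Times reporting suggests, is everything this sketch takes for granted: precise shared mapping of the room, and hardware small enough to actually wear.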

Though it could be years before we see what Magic Leap actually has up its sleeve, it and technology like it might finally fix our current AR problems. Advanced systems and software will create more natural-looking images, devices will become smaller and less obtrusive, and, most importantly, augmented reality will become a shared reality for the first time.