
Robot evolution from the lab to the living room

Jun 5, 2015

Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.

Carla Diana is a product designer and creative consultant focused on bringing objects to life electronically. She is a Lecturer in the Integrated Product Design Program at the University of Pennsylvania and a Fellow at the innovation design firm Smart Design. The views expressed are the author’s own, and do not necessarily represent the views of Qualcomm.

In the mid ‘80s, Rodney Brooks, eventual co-founder of iRobot, joined the faculty of MIT and set out to create some of the world’s first truly sophisticated autonomous robots. These machines would be built with the capacity to sense the environment around them and respond appropriately.

He began by looking at the simplest ambulatory creatures: insects. His first robots scrambled, scurried, and shuffled, using crude sensing techniques to learn about obstacles and critical conditions in their environments. It was a humble beginning to what would be a long journey, and mimicking the motions of insects was a sensible place to start.

But around 1993, plans changed. Brooks’s ambitions took the giant leap from insects to humanoids—walking, talking entities that resemble us—a shift that made his challenge spectacularly more complex overnight. To many experts, the reasons for pursuing such a herculean task were questionable. Aside from a dreamy rush to realize seductive visions from science-fiction films, what was the point of making humanoids? After all, this was a serious research institution, not the fickle landscape of Hollywood.

The value of recreating humans in machine form is still very much under debate. Hubert Dreyfus, UC Berkeley’s expert on the philosophy of artificial intelligence, suggested that “good AI is opportunistic, weaving back and forth between bizarre ambition and equally bizarre modesty.”[1] For Brooks, he posited, the ambition led him “to try his hand at the big prize without spending a few decades more of apprenticeship on artificial iguanas and tree sloths….” In essence, Dreyfus was suggesting that jumping straight to humanoids was a risky but potentially worthwhile intellectual shortcut to faster progress in AI. Whatever the merits of the academic debate, humanoid robot development became a reality.

Now, two decades after Brooks’s first humanoid projects, many other labs and entrepreneurs have followed in his footsteps and created several styles of humanoid robots, with increasing success. In my own career as a product designer, I've had the honor of working on Simon and Curi, two of the upper-torso humanoid research robots currently in development at Georgia Tech's Socially Intelligent Machines Lab. There, the team studies how we might interact with robots the way we would with other humans—through speech, gesture, and touch. These robots can not only see objects and perform simple tasks (grabbing, recognizing, and sorting them), but they can also identify human faces, understand social exchanges, and respond appropriately with blinks, nods, shrugs, and other gestures. It’s this type of research that advances human experience in a real and practical way.

My work as a product designer typically focuses on everyday objects, such as cameras and vacuum cleaners, but these humanoids have shown me how robotics can play a role in daily life. When I interact with Simon or Curi, I know they are machines made of plastic, metal, and silicon, yet I still get lost in the sense that I am gazing into the eyes of a living, feeling entity.

For example, when I speak, Curi can look towards me, letting me know she heard me and is paying attention. When I hand her an object and ask her to sort it by color, she takes it from my hand with a solid, reassuring grasp and holds it in front of her eyes, while her ears glow to match the object's color, letting me know that she understands what I want her to do. It’s an astounding interplay of nonverbal, gesture-based communication that is completely engaging.

Having experienced firsthand the powerful emotional impact of human-robot interaction, I can see aspects of that experience making their way into small, but critical, moments of product behavior. In a sense, I’ve taken my own Rodney-Brooks-sized leap into humanoids. And because of that leap, I’m able to understand and apply human-machine interaction to products in ways that can be more easily introduced to contemporary culture. We may not need machines with arms and legs and eyelids in every domestic context, but perhaps a camera that winks to let us know a picture has been taken, or a robotic vacuum cleaner that announces it’s done with a song and dance, can each have a role in our homes.

But what’s the point, we may still ask? Is this just a continuation of a navel-gazing hubris that makes us want to play God and mimic humans? Why give products “cute” features like blinking, bowing, and nodding? Why should they have character and personality?

These seemingly gratuitous features do have real practical value. As our products get more sophisticated, we need more intuitive ways to understand them. The Holy Grail for designers is to have products that interact with us in a truly social, and thus natural, way. People are, of course, capable of learning commands and computer languages, but doing so has meant navigating a filter between us and our machines. It’s taken a while, but we’ve finally reached the point where robotics can be incorporated into everyday objects, so we can remove that filter and begin to communicate with machines using touch, gestures, and words. Broadly applying these natural human interactions doesn’t have to mean walking, talking robots, but rather some abbreviated version of a robot that’s appropriate to its role.

Bringing robotics into the design of everyday objects is about finding clever, natural ways to integrate light, sound, and motion. A successful example of light as an intuitive, human form of communication is the Apple MacBook Pro sleep indicator light, which glows and fades at 12 pulses per minute, mimicking the pattern and pacing of human breath. It communicates with us intuitively, because we relate to it as a human activity. It speaks a language that we understand instantly, nonverbally, and without having to learn any commands or codes. (Apple was so aware of this that it even patented the feature.)
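To make that pacing concrete, here is a minimal sketch of how such a “breathing” indicator could be timed in software. Only the 12-breaths-per-minute figure comes from the example above; the sinusoidal brightness curve, the function name, and the console output are illustrative assumptions, not how Apple actually implements the feature.

import math
import time

BREATHS_PER_MINUTE = 12                 # pacing cited above
PERIOD_S = 60 / BREATHS_PER_MINUTE      # one full glow-and-fade cycle: 5 seconds

def breathing_brightness(t: float) -> float:
    """Brightness in [0, 1] at time t seconds: a smooth rise and fall, like an inhale and exhale."""
    return 0.5 * (1 - math.cos(2 * math.pi * t / PERIOD_S))

if __name__ == "__main__":
    start = time.time()
    while time.time() - start < PERIOD_S:        # show one full "breath"
        t = time.time() - start
        bar = "#" * int(breathing_brightness(t) * 40)
        print(f"t={t:4.1f}s |{bar:<40}|")
        time.sleep(0.1)

On real hardware, the same curve would simply drive an LED's PWM duty cycle instead of printing a text bar.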

Sound has also become a powerful means of robotic interaction. For example, users can operate Jawbone’s Jambox series of speakers completely through voice controls. They can even customize the personality of the voice to match mood and personal style, from “Rogue” to “Mobster” or classic digital “Arcade”. (I have mine speak to me in Italian.) Amazon just made a bold foray into this arena with the Amazon Echo, an entirely voice-based “command center” for the home that can search the web, order groceries, or control smart-home devices. She goes by “Alexa,” and your wish is her command.

Movement is the robotic attribute that is potentially the most compelling from a visceral point of view. As humans, we read a great deal of dramatic expression into even simple movements. The overhead for building this type of behavior is quite high (motors are mechanically complex and require sophisticated drivers to control), but the payoff is great. And though it’s still rare, we are starting to see it in our products.

The Polycom Eagle Eye videoconferencing camera, for example, automatically turns to face whoever is speaking, and when the conference is over it makes a very effective and reassuring gesture: it turns its head to hide its face, letting participants know that it’s giving them their privacy.

And the Jibo robot, which acts as an all-purpose countertop family computer or tablet, uses motion as a core aspect of expression. With a form that suggests an abstracted head and torso, it can turn towards a face when it recognizes a person, twist its head to look around the room, or bend forwards in a bow. Like the Echo, it’s an all-in-one command center, communicator, and fount of knowledge, but unlike the Echo, it has a screen and a camera, giving it a dynamic and highly emotive “face.” Between its visual interface behaviors and its motorized robotic motion, the effect is that of a mesmerizing living entity that can flirt, chuckle, or exchange glances and expressions.

The next few years will offer an exciting opportunity to see how people actually live and work with this new wave of robotic products in their daily lives. [Editor’s note: Check out this POV, titled Smartphone tech paves the way to the robotic future, on our sister blog, OnQ.] Though we as humans may be limited to understanding only what we know from our own experience, mirroring ourselves in robots will continue to empower products that we interact with intuitively. The better we get at harnessing our human-ness through design, the better life with our products will be.

 

[1] Hubert Dreyfus, quoted in Franchi, Stefano, and Güven Güzeldere (eds.), Mechanical Bodies, Computational Minds: Artificial Intelligence from Automata to Cyborgs, MIT Press, December 2004.

 
