Sep 11, 2012
Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.
This article is one-half of a point-counterpoint with Ray Kurzweil's article, "The Man-Machine Merger."
I'm sitting in this café in Silicon Valley, watching conversations flow between Macs, tablets, mobile devices, and their owners. I’m imagining the volume of information streaming back and forth from this one small public space. That thought blows my mind.
Never have we had greater access to knowledge than we do right now—limitless information just a few clicks away, the line between man and machine increasingly blurred. But what are we sacrificing when we tether our brains to our mobile devices, relying on them to tell us when we’re hungry, where we should go for cocktails, what driving route we should take, and what we should buy our significant others for their birthdays? Is all of this connectivity helping us to evolve into a more intelligent species, as some futurists speculate, or is this actually hurting us?
An even bigger question: As we surrender our cognitive independence to our devices in an effort to make our lives easier, what is happening to our humanity? Is it a tradeoff between greater intelligence and loss of humanity?
We have this amazing and wondrous thing called a brain, and yet with each new stride in technological innovation, we are tempted to use this masterful tool less and less.
Your brain is like a muscle. If you stop using your cognitive skills and instead rely on technology to do all of your thinking for you, in time those skills will start to atrophy. Use GPS to direct you everywhere you go? Your spatial skills will gradually worsen. Rely on autocorrect and spell-check for every bit of typed communication? Your spelling will start to suffer. A New York Times article reports that since the spread of cell phones and computers, young Japanese people's ability to read and write kanji has declined drastically; technology supplies the characters for you, so there is no need to learn how to draw them by hand.
The worsening of our basic cognitive abilities is bad enough, but I see an even more problematic issue with the increasing use of technology, and it has to do with the idea of augmenting intelligence. Ray Kurzweil, a prominent futurist, believes that the key to advancing human intelligence is the "singularity," the merging of man and machine. He thinks that by combining the computational abilities of a computer with the mind of an average human, a race of super-intelligent humans will emerge. I think he’s wrong. Intelligence is not the same as computational power or processing speed.
The fact is, there’s a reason why computers haven’t yet reached human-level intelligence, and it has nothing to do with how fast they can compute or how much processing power we can give them. It’s because humans have something that computers don’t, something that is a significant component of intelligence and that many people are all too quick to disregard. That critical element? Creativity.
When we over-rely on technology to do our thinking for us, not only do our cognitive skills lose their edge; our creativity suffers as well. Why do we care about creativity? For one thing, creativity is at the root of our ability to solve novel problems. Creativity is what we use when we're presented with a new problem and need to figure out the best course of action. When we let our devices make all of these decisions for us, we stop exercising those problem-solving skills.
What happens if your device breaks, or makes an error? If you don’t keep those skills sharp, what makes you think you’ll be able to make a quick and accurate decision when faced with a new dilemma?
Computers have been trained to paint pictures and compose music, but they have not yet mastered creative cognition—thinking divergently, going back and forth between conventional and unconventional thinking, making errors, and deciding on the best, most useful response, given a particular situation. On this level, creativity can be seen as intelligence that is emergent from serendipitous error. But this isn’t how we program computers—we want them to be error-free.
However, error makes us human. And error is essential for creativity. If we eliminate the possibility of human error by letting technology make our decisions for us, our ability to think creatively will suffer. If mindlessly accepting the data our devices feed us, without critical thinking or creative problem solving, becomes the norm, we edge away from humanity and closer to becoming robots. Is this really what we aspire to? As a robopsychologist, I aim for the opposite: I strive to develop AI that thinks more like humans do, capitalizing on serendipity and error in order to learn more effectively.
I’m not saying technology is all bad, just that it can work against us if we aren’t conscious of how and when we use it. Which is more important: being able to perform faster computations, or being able to come up with a creative solution to a problem? Today, I believe creativity is the more critical skill.
But then again, everything is a tradeoff.
There are ways to make technology work to your intellectual advantage. Using autocorrect while you write may hurt your long-term spelling skills, but it lets you communicate faster in the short term, and maybe even better: if you don’t have to spend your cognitive capital on how to spell words, you can focus on the narrative instead. In that case, autocorrect might even enhance creativity.
Consider as well the calculator that won’t give you the correct answer until you make an educated guess about what the right answer is. It allows you to benefit from the speed and accuracy of technology, but forces you to remember how and why you are doing those calculations in the first place. In other words, it prevents you from getting too cognitively lazy and it teaches you math, not just calculations.
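One way to picture such a guess-first calculator: it withholds the computed answer until your estimate lands reasonably close to the truth. Here is a minimal sketch in Python; the function name and the 25% tolerance are my own assumptions for illustration, not a description of any existing product:

```python
import math

def guess_first_multiply(a: float, b: float, guess: float, rel_tol: float = 0.25):
    """Multiply a and b, but reveal the product only if the caller's
    estimate is within rel_tol (25% by default) of the true answer.

    Returns the exact product, or None if the estimate is too far off.
    (Hypothetical illustration of a "guess-first" calculator.)
    """
    actual = a * b
    if math.isclose(guess, actual, rel_tol=rel_tol):
        return actual
    return None  # estimate rejected; the user has to think again

# A rough estimate of 19 * 21 ("about 400") unlocks the exact answer:
print(guess_first_multiply(19, 21, 400))  # → 399
# A wild guess does not:
print(guess_first_multiply(19, 21, 900))  # → None
```

Requiring the estimate first keeps your mental arithmetic in the loop while still letting the machine supply the exact result.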
Do I see the rise of technology as the Intellectual Apocalypse? Not necessarily. The best way to make technology work for you instead of against you is to be smart about it: use it to free up the time and mental energy for higher-level cognitive activities, not as a crutch because you don’t feel like activating your neurons. If you are creative about how you channel and process the limitless supply of information out there, remaining conscious of how, why, and when you use it, I believe technology can actually increase your intelligence. But don’t ask your device how to make that happen. Figure that one out for yourself. Your brain will thank you.
This article is commissioned by Qualcomm Incorporated. The views expressed are the author's own.