People always want more. That’s just the way it is. In the case of computing devices, mobile or otherwise, we’ll always want better user experiences and more performance at affordable prices. So far, the industry has delivered. And we’re all growing to expect it. But is there an end? Is there a point where the mobile experience reaches its peak?
At Qualcomm, we're excited about the many emerging mobile experiences and applications on the horizon, such as computational photography, augmented reality, realistic physics, and contextual awareness. These experiences are not only computationally intensive, but they also introduce new and diverse types of workloads with equally diverse requirements.
In augmented reality, for example, your mobile device has to continuously analyze the camera feed, recognize and track objects of interest, locate them in 3D space, and superimpose perspective-corrected overlay images—the “augmented” part. These diverse workloads demand a lot of compute horsepower! In addition, the algorithms behind them are still evolving, which means the processors need some level of programmability. Programmability provides the flexibility to run diverse algorithms, but it comes at a cost in power.
So, the challenge is to provide these emerging mobile experiences while still satisfying the key mobile device constraints that consumers desire: a sleek, ultra-light device that stays cool and delivers long battery life.
Some tech-savvy people might think that the CPU is the answer, but that's only part of it. As I alluded to earlier, the CPU's immense flexibility and programmability come at the price of power. In fact, previous methods of scaling the CPU to meet increased compute requirements, within the power and thermal constraints of mobile devices, have delivered diminishing returns. Let's take a look at a couple of these methods:
- Single-core CPU scaling improved compute performance by raising the CPU clock frequency and by increasing instructions per clock (IPC) through architectural improvements. Both levers are running out. Remember the CPU GHz race in the PC market? That race slowed many years ago, and maximum clock frequencies have flattened out. IPC gains have also slowed: squeezing out more performance requires ever greater micro-architectural complexity, which is not only challenging to design but also power hungry.
- Multi-core CPU scaling was the next step, a way to keep increasing compute performance once clock rates flattened out. By duplicating CPU cores, semiconductor companies took advantage of the extra transistors and scaled the maximum theoretical compute performance. However, actually realizing that performance depends on being able to run multiple programs or threads in parallel. Amdahl's Law shows how quickly the returns diminish when a program contains sequential code. In the PC market, the number of CPU cores has largely settled at quad-core. In addition, since the CPU, as I already noted, is not necessarily the most efficient processor, running multiple CPU cores at maximum clock frequency for a sustained period is very challenging in a thermally limited mobile form factor (i.e., things get hot!).
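To make the Amdahl's Law point above concrete, here is a minimal sketch (the function name `amdahl_speedup` and the 90%-parallel example are mine, chosen for illustration, not from the original post):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's Law: overall speedup when only part of a program can parallelize.

    parallel_fraction: share of execution time that can run in parallel (0..1)
    cores: number of cores the parallel portion is spread across
    """
    serial_fraction = 1.0 - parallel_fraction
    # The serial part always runs at 1x; only the parallel part scales with cores.
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# Even a program that is 90% parallel falls far short of linear scaling:
for cores in (1, 2, 4, 8, 16):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.9, cores):.2f}x speedup")
```

With 90% parallel code, four cores yield only about a 3.1x speedup, and no number of cores can ever exceed 1 / (1 − 0.9) = 10x, which is exactly the diminishing-returns picture described above.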
So how are we going to continue scaling compute going forward? Just like we have done over and over in the past, we need to change our computing approach so that we can keep increasing the compute performance and give consumers the mobile experiences that they want.
At Qualcomm, we like to give people what they want. So, what's next? Well, over the next several months I'm going to explain why heterogeneous computing is the next paradigm in mobile. By intelligently assigning work to the most appropriate processors, heterogeneous computing improves app performance, battery life, and thermal efficiency, enabling the evolution of new mobile experiences.
Look for future blogs and webinars to learn about Qualcomm’s view on heterogeneous computing.