Power vs. Performance Management of the CPU
October 25, 2013 | Adam Kerin
In our previous post, we discussed the thermal and power limitations of modern smartphones. The power and performance management decisions for smartphones are vastly different from those for PCs. Today's smartphones are compute powerhouses, yet they are only a few millimeters thin, held in your hand, and carried in your pocket. Furthermore, for most of the day they run on battery rather than tethered to an outlet. This is why power management plays such a vital role in smartphones.
This blog installment continues our series on smartphone power and dives deeper into the power and performance management of Krait CPUs inside Snapdragon processors.
Effective smartphone power management helps deliver long-lasting battery life and a cool skin temperature. The amount of power (P) a CPU draws is a function of voltage (v), frequency (f), and capacitance (c). (For more information about measuring and isolating SoC power, please refer to our initial post on best practices.)
P = v² × f × c
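To make the relationship concrete, here is a small sketch that evaluates the equation at two operating points. The voltage, frequency, and capacitance values are invented for illustration, not measurements of any actual Snapdragon part:

```python
# Illustrative sketch of the dynamic-power relationship P = v^2 * f * c.
# All numbers below are hypothetical example operating points.

def dynamic_power(v, f, c):
    """Dynamic CPU power in watts, from voltage (V), frequency (Hz), capacitance (F)."""
    return v * v * f * c

C = 1e-9  # assumed effective switched capacitance of 1 nF (hypothetical)

high = dynamic_power(1.1, 1.7e9, C)  # full-speed operating point
low = dynamic_power(0.9, 1.0e9, C)   # scaled-down operating point

# Dropping frequency ~41% and voltage ~18% cuts power by roughly 60%,
# because voltage enters the equation squared.
print(f"high: {high:.2f} W, low: {low:.2f} W, ratio: {low / high:.2f}")
```

The disproportionate saving from the modest voltage drop is exactly why the next sections focus on voltage as the biggest lever.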
The capacitance, or the amount of stored electrical charge that is switched on each clock transition, is a relatively fixed value determined by the design. Many variables contribute to the capacitance, including the transistor gates and the length of the wires signals travel across the chip. Because these values are fixed in silicon, capacitance is not controllable while the smartphone is running, i.e., at "runtime."
However, frequency and voltage are controllable during runtime. At the CPU core level, dynamic clock and voltage scaling (DCVS) is a technique used to adjust the frequency and voltage of the power equation to deliver the needed performance at the ideal power level.
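A minimal sketch of the idea behind a DCVS policy is to keep a table of valid frequency/voltage operating points and pick the lowest one that still covers the current demand. The table below is hypothetical and real governors are far more sophisticated, but the selection logic captures the core concept:

```python
# Hypothetical (frequency_MHz, voltage_V) operating points, sorted from
# slowest/lowest-voltage to fastest/highest-voltage. These pairs are
# invented for illustration, not taken from any real part.
OPP_TABLE = [(384, 0.85), (918, 0.95), (1242, 1.05), (1674, 1.15)]

def select_opp(demand_mhz):
    """Return the lowest operating point whose frequency covers the demand."""
    for freq, volt in OPP_TABLE:
        if freq >= demand_mhz:
            return freq, volt
    return OPP_TABLE[-1]  # saturate at the top operating point

print(select_opp(500))   # light load -> (918, 0.95)
print(select_opp(2000))  # demand beyond max -> (1674, 1.15)
```

Because each frequency step is paired with the minimum voltage that supports it, running slower also means running at lower voltage, which is where the quadratic savings come from.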
Voltage is a significant, quadratic (v²) contributor to CPU power draw, so reducing voltage saves more power than a proportional drop in frequency. Voltage reductions have a limit, however, because lower voltage slows the on/off transition time of the transistors. If the rising/falling transition is delayed enough that a signal does not propagate to the next pipeline stage before the next clock edge, the instruction will fail. Frequency reductions do not carry the same design risks as modulating voltage, but they do impact performance.
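The voltage floor described above can be sketched with a textbook delay approximation: gate delay grows as the supply voltage approaches the transistor threshold, so the maximum safe clock frequency falls with voltage. The model and every constant below are generic textbook approximations, not parameters of any real process:

```python
# Rough model of why voltage cannot be lowered arbitrarily: critical-path
# delay grows as Vdd approaches the threshold voltage, so the signal stops
# fitting inside one clock period. All constants are assumed/illustrative.

V_TH = 0.35   # assumed transistor threshold voltage (V)
ALPHA = 1.3   # assumed velocity-saturation exponent
K = 0.4e-9    # assumed delay scaling constant

def critical_path_delay(vdd):
    """Alpha-power-law style approximation: delay ~ Vdd / (Vdd - Vth)^alpha."""
    return K * vdd / (vdd - V_TH) ** ALPHA

def max_freq_hz(vdd):
    """Highest clock rate at which the critical path still meets timing."""
    return 1.0 / critical_path_delay(vdd)

for vdd in (1.1, 0.9, 0.7):
    print(f"Vdd={vdd:.1f} V -> fmax ~ {max_freq_hz(vdd) / 1e9:.2f} GHz")
```

The takeaway is the monotonic trend: each voltage step down must be paired with a frequency step down, which is why DCVS treats them as a coupled pair rather than two independent knobs.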
Varying voltage and frequency to reduce CPU power draw has become even more of an industry challenge as modern smartphones contain multiple CPU cores. Qualcomm Snapdragon processors take DCVS a step further with asynchronous symmetric multiprocessing (aSMP). Competing designs generally use DCVS, but each core is forced to follow the others' frequency and voltage settings. The Krait CPU cores of Snapdragon processors sit on separate voltage and frequency planes, allowing each CPU core to run at an independent frequency and voltage and deliver scalable performance and power levels.
For example, if the smartphone is running a single-threaded task that needs maximum performance from only one CPU core, competing designs force all four CPU cores to their maximum frequency even though only a single core is needed. This inefficient use of resources unnecessarily burns power and hurts smartphone battery life. With aSMP inside Snapdragon processors, only one CPU core spins up while the others remain in a low-power state until needed.
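A back-of-the-envelope comparison shows why this matters. Using the same v² × f × c relationship as before, with invented operating points, we can contrast a lockstep four-core design against an aSMP design for a single busy thread:

```python
# Hedged illustration of per-core DCVS (aSMP) vs. lockstep scaling for a
# single-threaded workload. All operating points are invented examples.

def core_power(v, f, c=1e-9):
    """Dynamic power of one core: v^2 * f * c, with an assumed 1 nF capacitance."""
    return v * v * f * c

MAX = (1.15, 1.7e9)   # hypothetical maximum operating point (V, Hz)
IDLE = (0.80, 0.3e9)  # hypothetical low-power operating point (V, Hz)

# Lockstep design: one busy thread drags all four cores to the max point.
lockstep = 4 * core_power(*MAX)

# aSMP: one core runs at max while the other three stay at the low point.
asmp = core_power(*MAX) + 3 * core_power(*IDLE)

print(f"lockstep: {lockstep:.2f} W, aSMP: {asmp:.2f} W")
```

With these made-up numbers the aSMP configuration draws roughly a third of the lockstep power for the same single-threaded performance; the exact ratio depends on the real operating points, but the direction of the saving does not.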
The vast majority of the time, you want your smartphone to take advantage of power management features like DCVS and aSMP to deliver the best battery life possible. At other times, to deliver a better user experience, you want all of the CPU's horsepower unleashed, without battery-saving features holding it back.
For those situations, maximum performance is achieved through Qualcomm Technologies' Performance Mode. Performance Mode enables applications to change CPU parameters to deliver optimum performance, including locking the CPU frequency, requesting additional CPU cores, and setting the duration of these events. It is important to note that Performance Mode only affects the CPU settings mentioned above; it does not allow overclocking of the CPU, nor does it apply to the GPU, DSP, RAM, or other cores of the SoC. Finally, even when the battery-saving features aren't enabled, thermal trips and thermal management remain engaged, protecting the SoC and smartphone. In other words, the thermal thresholds are still obeyed during this mode, staying true to the smartphone design.
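Performance Mode's actual interface is OEM-facing and not public, but the three knobs described above (a locked frequency, a core count, and a bounded duration) can be modeled conceptually. The class below is a purely hypothetical sketch of that contract, not Qualcomm's API:

```python
# Hypothetical model of a Performance Mode request: pin a minimum
# frequency, ask for extra cores, and expire automatically after a set
# duration. This is NOT the real OEM API, just a conceptual sketch.

import time

class PerfModeRequest:
    def __init__(self, min_freq_mhz, num_cores, duration_s):
        self.min_freq_mhz = min_freq_mhz
        self.num_cores = num_cores
        self.expires_at = time.monotonic() + duration_s

    def active(self):
        """The boost lapses on its own, so a request cannot drain the battery."""
        return time.monotonic() < self.expires_at

# e.g., boost for a touch interaction: hold 1.7 GHz on 2 cores for 200 ms
req = PerfModeRequest(min_freq_mhz=1700, num_cores=2, duration_s=0.2)
print(req.active())  # True immediately after the request
```

The bounded duration is the key design point: the boost is an explicit, short-lived request tied to a user interaction, after which normal DCVS policy resumes.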
Performance Mode is available to our OEMs, who can enable it for built-in Java and native applications, giving OEMs the flexibility to make power and performance trade-off decisions based on factors such as industrial design, feature set, and application focus. One area where our OEMs have seen considerable benefit using Performance Mode is in touch-screen, camera, and scrolling/browsing scenarios, where instant response times are essential for a great user experience.
For example, while a user is in an e-mail client or web browser, chances are they will scroll and explore the content. There is a slight delay, on the order of a few hundred milliseconds, from the time your finger touches the screen to the time the displayed content responds. If the content does not appear to react instantly to the touch, most users find it noticeable and frustrating. Another performance-sensitive period is after the web or mail content has started scrolling: if frames are dropped, the transition appears choppy, which is equally frustrating.
Screen rotation is another example where transitions should happen quickly. Consumers want a fast, fluid experience. Switching out of battery-saving operation into Performance Mode as soon as the user interacts with the screen or phone helps deliver those snappy and smooth experiences.
Taking pictures is also a scenario where minimized latency is important for a great experience. In the camera's "continuous" or "snapshot" mode, the phone rapidly captures a series of photos. This mode is popular for dynamic action shots, like a snowboarder performing a 360. Wavelet Noise Reduction (WNR) is a post-processing technique that improves image quality, but the operation must finish before the next photo can be captured. The Snapdragon 800 implements this feature in hardware, while lower performance tiers such as the Snapdragon 600 perform WNR in software. In that case, the Hexagon DSP works in conjunction with the CPU cores, and Performance Mode is invoked to complete the WNR faster. In turn, this allows users to take more photos per second and capture more of the action.
Beyond directly benefitting the user experience, Performance Mode can be used to enable optimal performance for synthetic benchmarks. Synthetic benchmarks are not real-world applications, but they are used to represent certain usages, often with a focus on the performance of specific components within the SoC. The usual metrics are time-based (seconds or frames per second). Disabling the battery-saving features ensures the CPU's maximum performance is demonstrated.
Let's compare this to a more widely known benchmark that literally assesses a device's horsepower. If you're a professional racer, or just going through a mid-life crisis, you might be eyeing a red Nissan GT-R as your next purchase. Sports car manufacturers often boast about their 0–60 miles-per-hour or quarter-mile times. In this case, the 2012 GT-R took only 2.7 s to reach 60 mph and 11.1 s to complete the quarter-mile. These metrics highlight how fast a car can accelerate and hint at its top speed, but they don't reflect miles per gallon (MPG). MPG is a useful metric, just not for a speed test.
Similarly, when executing a benchmark intended to assess performance, the CPU should be at its maximum performance level. If power management were still enabled, the CPU would only gradually climb to its maximum frequency. That would be analogous to placing gradually increasing speed-limit signs along the quarter-mile track. Like the sports car in the speed test, the CPU's "pedal" should also be floored.
With that said, synthetic benchmarks by definition are not real-world applications, and they can be misleading or abused. Ideally, you should test applications that represent the end-user experience.
Furthermore, in a modern SoC, all the major components, or "engines" (the CPU, GPU, modem, DSP, etc.), are integrated onto a single die. The CPU consumes only about 15% of the die area. The remaining engines are also central to a great user experience and have their own power and performance management technologies. You can read more about these features in our previous post, and stay tuned for an upcoming post dedicated to the modem's power management features.
Overall, all components of Snapdragon processors are designed to provide high peak performance, tempered with a careful set of performance points that back off as needed to meet the constraints of leading-edge smartphones.
Our next post will explore some of the new user experiences enabled by these and other powerful Snapdragon features.