The remarkable performance advances in computing since the birth of the modern microprocessor can be largely attributed to Moore’s Law—the doubling of the number of transistors on a chip about every two years as technology advances allow for smaller transistors. What is less well known is that these advances have been accompanied by corresponding improvements in energy efficiency.
However, the energy-related benefits of Moore’s Law are slowing down as the miniaturization of transistors is now bumping against physical limits. This has forced the industry to consider alternative ways to improve processor performance and efficiency. With the development of new processor architectures, power-efficient technologies, and power management techniques, AMD has its sights set on the goal of accelerating the energy efficiency of its Accelerated Processing Units (APUs) 25x by 2020 (25x20).
With this 25x20 goal, the energy efficiency of AMD’s products will outpace the historical efficiency trend predicted by Moore’s Law by at least 70 percent from 2014 to 2020. This means that in 2020, a computer could accomplish a task in one-fifth the time of a 2014 PC while consuming, on average, less than one-fifth the power. The following explains how AMD is working to achieve this goal; more detail can be found in a new white paper: AMD’s Commitment to Accelerating Energy Efficiency.
Architectural Innovation
AMD’s APUs place both CPUs and GPUs on the same piece of silicon. This yields better efficiency, since the two share the memory interface, power delivery, and cooling infrastructure. Many workloads, such as natural human interfaces and pattern recognition, benefit from the parallel execution capabilities of the GPU. Optimizing concurrent GPU and CPU operation delivers maximum performance, allowing a compute device to finish the task earlier and ultimately save power.
With Heterogeneous System Architecture (HSA), the CPU and GPU within the APU execute as peers. Added to this, AMD’s heterogeneous Uniform Memory Access (hUMA) enables the CPU and GPU to use the same memory, which makes coding far easier and overcomes a major hurdle for parallel programming. These capabilities reduce handoffs between the CPU and GPU and reduce the number of instructions required to complete a task, thus saving power.
Power Efficient Silicon Technology
At the circuit level, a number of innovative approaches are used to help maximize the efficient use of silicon. One of these involves reducing the average voltage for a given frequency of operation. This is a big lever since power reduces with the square of voltage. To accomplish this, specialized integrated detectors observe a voltage dip and temporarily reduce the frequency in less than a nanosecond. When the voltage excursion has passed, full frequency is restored. Since these excursions are rare, there’s almost no compromise in computing performance, while power is cut by 10 to 20 percent.
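The quadratic relationship is easy to see in the standard dynamic-power model for CMOS logic. The sketch below uses purely illustrative numbers (not AMD measurements) to show why shaving a 10 percent voltage guard band saves roughly 19 percent power:

```python
# Dynamic CMOS switching power follows P = C * V^2 * f.
# The numbers below are hypothetical, chosen only to illustrate
# why a small voltage reduction yields an outsized power saving.

def dynamic_power(capacitance, voltage, frequency):
    """P = C * V^2 * f, the classic dynamic-power model."""
    return capacitance * voltage**2 * frequency

# Normalized baseline at nominal voltage.
nominal = dynamic_power(capacitance=1.0, voltage=1.0, frequency=1.0)

# Droop detectors let the design run with a 10% smaller guard band.
trimmed = dynamic_power(capacitance=1.0, voltage=0.9, frequency=1.0)

savings = 1 - trimmed / nominal  # 1 - 0.9**2 = 0.19, i.e. ~19% less power
```

Because power scales with the square of voltage, even a modest 10 percent voltage cut lands squarely in the 10 to 20 percent savings range cited above.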
Another innovation is Adaptive Voltage and Frequency Scaling, which involves the implementation of patented silicon speed capability and voltage sensors in addition to traditional temperature and power sensors. The new sensors will enable each APU to adapt to its particular silicon characteristics, platform behavior, and operating environment. By adapting in real time, the APU can optimize for maximum efficiency, squeezing up to 20 percent power savings at a given performance level.
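One way to picture adaptive voltage scaling is as a control loop that steps the supply voltage down while an on-die speed sensor still reports timing margin. This is a simplified sketch with hypothetical sensor models, voltages, and step sizes, not AMD’s patented implementation:

```python
# Hypothetical sketch of adaptive voltage scaling: step the supply
# down (in millivolts, to keep the arithmetic exact) while a silicon
# speed sensor still reports non-negative timing margin.

def adapt_voltage_mv(read_margin_mv, v_start=1000, v_min=700, step=10):
    """Return the lowest safe supply voltage for this particular die."""
    v = v_start
    while v - step >= v_min and read_margin_mv(v - step) >= 0:
        v -= step
    return v

# Two dies with different silicon characteristics (toy sensor models):
fast_die = lambda mv: mv - 850   # still has timing margin down to 850 mV
slow_die = lambda mv: mv - 920   # needs at least 920 mV

# Each part settles at the voltage its own silicon can support,
# rather than a worst-case guard-banded voltage for all parts.
```

The point of the sketch is that each APU converges on its own operating point: a fast die runs at a lower voltage than a slow one, instead of every part carrying worst-case margin.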
To further reduce power use on the CPU, AMD has leveraged a high-density library similar in design style to that of a GPU. Using a high-density library can save power and area by 30 percent, and also frees up space, allowing AMD to place the GPU, a multimedia processor, and the system controller on the same chip.
Power Management Techniques
AMD has designed power management algorithms focused on optimizing power for typical use conditions. These include a number of race-to-idle techniques that put a computer into sleep mode as quickly as possible, saving energy that was previously wasted. By monitoring performance demands and coordinating activity across all components on the chip so the work is completed quickly, the power manager can put the processor into idle mode as often as between frames of video playback or between keystrokes while typing.
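The energy arithmetic behind race-to-idle is straightforward. With purely illustrative power figures (not measured AMD numbers), finishing the work fast and dropping into a deep idle state beats pacing the work across the whole interval:

```python
# Hypothetical numbers illustrating race-to-idle over one interval
# between keystrokes or video frames.

def total_energy(active_w, active_s, idle_w, idle_s):
    """Energy (joules) = active power x time + idle power x time."""
    return active_w * active_s + idle_w * idle_s

WINDOW_S = 1.0  # one interval between user events

# Pace the work: run at 1 W for the whole window, never idle.
paced = total_energy(active_w=1.0, active_s=WINDOW_S,
                     idle_w=0.05, idle_s=0.0)

# Race to idle: burst at 2 W for 0.4 s, then deep idle at 0.05 W.
raced = total_energy(active_w=2.0, active_s=0.4,
                     idle_w=0.05, idle_s=WINDOW_S - 0.4)

# paced = 1.0 J, raced = 0.83 J: racing wins despite the higher peak power.
```

The burst draws more instantaneous power, but the long stretch spent in a near-zero idle state more than pays it back, which is exactly why the power manager races between user events.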
AMD’s power management also monitors the temperature of the silicon and the end-user machine. With this understanding, the APU can briefly increase power output during compute-intensive jobs for much better response time while still avoiding overheating. Once the job is complete, the power is reduced, lowering the device temperature. This practice helps yield better overall energy efficiency, as tasks are performed more quickly and the machine can rapidly shift to idle mode.
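A minimal sketch of this temperature-aware boosting, with hypothetical power levels and temperatures rather than real AMD operating points, might look like:

```python
# Illustrative temperature-aware boost policy: run above the sustained
# power limit while thermal headroom remains, then back off before the
# thermal ceiling is reached. All numbers are hypothetical.

SUSTAINED_W = 15.0   # power the cooling solution can dissipate indefinitely
BOOST_W = 25.0       # short-term boost power for bursty workloads
T_MAX_C = 95.0       # thermal ceiling for the silicon
GUARD_C = 10.0       # safety margin before the ceiling

def pick_power_w(die_temp_c):
    """Boost while the die is comfortably below the thermal ceiling."""
    if die_temp_c < T_MAX_C - GUARD_C:
        return BOOST_W
    return SUSTAINED_W
```

A cool die gets the full boost for responsiveness; as the temperature estimate approaches the ceiling, the policy falls back to the sustainable power level, so bursty tasks finish sooner without ever overheating the device.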
Another capability incorporated in recent AMD APUs is run-time entry of the processor into the extremely low-power “S0i3” state using power gating. By entering this state on the fly, the APU can often reach standby-equivalent power levels within sub-second time frames. This translates directly to lower average power consumption for typical use conditions.
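Entering a deep power-gated state is not free, so this kind of decision follows a classic break-even test: the energy spent transitioning in and out must be repaid by the lower idle floor. The constants below are hypothetical illustrations, not measured S0i3 figures:

```python
# Break-even test for entering a deep, power-gated idle state.
# All constants are hypothetical illustrations.

SHALLOW_IDLE_W = 0.5    # power in a light, fast-exit idle state
DEEP_IDLE_W = 0.01      # near-standby power with power gating applied
TRANSITION_J = 0.1      # energy spent entering and exiting the deep state

def break_even_s():
    """Idle duration beyond which the deep state saves net energy."""
    return TRANSITION_J / (SHALLOW_IDLE_W - DEEP_IDLE_W)

def should_enter_deep(predicted_idle_s):
    """Enter the deep state only if the idle stretch repays the transition."""
    return predicted_idle_s > break_even_s()

# With these numbers the break-even point is about 0.2 s, which is why
# sub-second entry into a standby-like state can pay off so often.
```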
AMD is committed to the 25x20 goal and will leverage a portfolio of intellectual property spanning architectural innovation, power-efficient technology, and power management techniques. Combining these low-power capabilities with the efficient hardware acceleration already evident in recent AMD products, such as that for video, audio, and GPU-based computation, provides a clear path to achieving the goal.
These innovations are being incorporated into all of AMD’s APU products and will allow its processors to outpace the historical efficiency trend predicted by Moore’s Law by 70 percent between 2014 and 2020, despite the slowdown in silicon improvement.
To read my whitepaper, “AMD’s Commitment to Accelerating Energy Efficiency”, click here: http://www.amd.com/Documents/energy-efficiency-whitepaper.pdf
Sam Naffziger is a Corporate Fellow at AMD responsible for low power technology development, and has been the key innovator behind many of AMD’s low power features. He has been in the industry 27 years with a background in microprocessors and circuit design, starting at Hewlett Packard, moving to Intel, and then to AMD in 2006. He received his BSEE from Caltech in 1988 and his MSEE from Stanford in 1993, and holds 115 US patents in processor circuits, architecture, and power management. He has authored dozens of publications and presentations in the field and is a Fellow of the IEEE.

Links to third party sites, and references to third party trademarks, are provided for convenience and illustrative purposes only. Unless explicitly stated, AMD is not responsible for the contents of such links, and no third party endorsement of AMD or any of its products is implied.