Monday, April 1, 2013

About matching cores to demands in always-on mobile applications


As the role of the mobile device continues to evolve, chip designers will face increased pressure to create processors that can handle next-generation computing. Designers need to look beyond single-core solutions to deliver powerful, energy-efficient processors that allow for the "always on, always connected" experience consumers want.

Mobile usage has changed significantly, with today’s consumers increasingly using their smartphones for the majority of the activities in their connected lives. This includes high-performance tasks such as Web browsing, navigation, and gaming, as well as less demanding background tasks such as voice calls, social networking, and e-mail services. As a result, the mobile phone has become an indispensable computing device for many consumers.
At the same time, new mobile form factors such as tablets are redefining computing platforms in response to consumer demand. This is creating new ways for consumers to interact with content and bringing what was once only possible on a tethered device to the mobile world. What we’re seeing is truly next-generation computing.
As with any technology shift, designers must consider several factors to address the changing landscape, but in this case, a few issues stand out more than the rest as trends that will define where mobility is going.
Increased data
Consumers today desire an on-demand computing experience that entails having data available at the ready anytime. Gone are the days when consumers owned smartphones for the sole purpose of making phone calls. They now require a rich user experience allowing them to access documents, e-mail, pictures, video, and more on their mobile devices. With more than 37 billion applications already downloaded, data consumption continues to rise. According to a recent Cisco report, global mobile data traffic will grow to 10.8 exabytes (an exabyte is 1 billion gigabytes) per month by 2016, and by then video is expected to comprise 71 percent of all mobile data traffic.
Battery life
Mobile computing has always required a balance of performance and power consumption. The combination of smaller form factors and consumers demanding more out of their devices has led chip designers to develop ways around the power/performance gap. Without cutting power altogether, designers turn to techniques like clock scaling, where processor speed varies with the intensity of a task. Designers have also turned to dual- and quad-core processors that reduce power while still delivering performance. As consumers continue to trend toward an “always on, always connected” experience, processors must become more powerful and more energy efficient.
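The idea behind clock scaling can be illustrated with a short sketch. The frequency table and load figures below are hypothetical, and a production design would rely on the platform's DVFS framework (such as Linux cpufreq) rather than hand-rolled logic, but the sketch shows how a core's clock can follow the intensity of a task:

/*
 * Minimal sketch of clock scaling: pick a clock step in proportion to how
 * busy the core has recently been. The operating points and load metric
 * are hypothetical, for illustration only.
 */
#include <stdio.h>

/* Hypothetical operating points, in MHz, from near-idle to full speed. */
static const unsigned freq_table_mhz[] = { 200, 400, 800, 1200, 1600 };
#define NUM_STEPS (sizeof(freq_table_mhz) / sizeof(freq_table_mhz[0]))

/* Map a utilization percentage (0-100) onto one of the frequency steps. */
static unsigned pick_frequency(unsigned load_percent)
{
    unsigned step = (load_percent * (NUM_STEPS - 1) + 50) / 100; /* rounded */
    return freq_table_mhz[step];
}

int main(void)
{
    /* A background task (e-mail sync) versus a demanding one (gaming). */
    printf("10%% load -> %u MHz\n", pick_frequency(10));
    printf("95%% load -> %u MHz\n", pick_frequency(95));
    return 0;
}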
Connectivity
The way consumers use computing devices is drastically changing, as their primary computing devices are no longer stationary, but carried around in their pockets, bags, and purses. The number of mobile connected devices will exceed the world’s population in 2012, according to industry studies. By 2016 there will be more than 10 billion mobile Internet connections around the world, with 8 billion of them being personal devices and 2 billion Machine-to-Machine (M2M) connections.
Implications for chips
So where is Moore’s Law going to take the embedded industry with this mobile revolution? History predicts a doubling of transistor counts roughly every 18 months, from thousands to billions of transistors, but the performance of a single processor has all but stalled because the amount of power that can be consumed in the system has peaked.
For any single processor in the future, heat dissipation will limit any significant increase in speed. Once a device hits its thermal barrier, it will simply melt, or in the case of a mobile phone, start to burn the user. Apart from the physical aspects of heat dissipation, it is also hugely power inefficient. The amount of power it takes to push a processor to perform faster and faster grows exponentially, and the last little bit is especially expensive. Whereas doubling a processor’s size once meant doubling its speed, doubling the size now yields only a small percentage gain. That’s one of the reasons why designers have hit a limit for single-core systems.
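A rough, simplified calculation makes the point. Dynamic power scales roughly with C × V² × f, and supply voltage has to rise along with clock frequency, so pushing one core twice as fast costs far more energy than running two cores at the original speed. The capacitance and voltage figures below are illustrative only:

/*
 * Back-of-the-envelope illustration of why the "last little bit" of
 * single-core speed is so expensive. Dynamic power is roughly C * V^2 * f,
 * and voltage must rise with frequency, so power grows much faster than
 * performance. All constants are illustrative, not measured values.
 */
#include <stdio.h>

static double dynamic_power(double cap, double volts, double freq_ghz)
{
    return cap * volts * volts * freq_ghz;   /* P ~ C * V^2 * f */
}

int main(void)
{
    const double cap = 1.0;                  /* normalized switched capacitance */

    double base   = dynamic_power(cap, 1.0, 1.0);       /* one core at 1 GHz */
    double faster = dynamic_power(cap, 1.3, 2.0);        /* one core pushed to 2 GHz,
                                                            needing ~1.3x the voltage */
    double dual   = 2.0 * dynamic_power(cap, 1.0, 1.0);  /* two cores at 1 GHz */

    printf("1 core  @ 1 GHz: %.2fx power for 1x throughput\n", base / base);
    printf("1 core  @ 2 GHz: %.2fx power for ~2x throughput\n", faster / base);
    printf("2 cores @ 1 GHz: %.2fx power for ~2x throughput (parallel work)\n",
           dual / base);
    return 0;
}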
Solving the power problem
If designers can’t make a single core go faster, the number of individual cores has to increase. This brings the benefit of being able to match each core to the demands being placed on it.
ARM’s big.LITTLE processing extends consumers’ “always on, always connected” mobile experience with up to double the performance and 3x the energy savings of existing designs. It achieves this by grouping a “big” multicore processor with a “little” multicore processor and seamlessly selecting the right processor for the right task based on performance requirements. This dynamic selection is transparent to the application software or middleware running on the processors.
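Conceptually, the selection works like the sketch below: stay on the energy-efficient cluster until demand exceeds what it can deliver, hand the work to the high-performance cluster, and drop back once demand subsides. The thresholds and the cluster type here are invented for illustration; the real mechanism lives in the operating system and is invisible to applications:

/*
 * Conceptual sketch of big.LITTLE-style cluster selection. The capacity
 * threshold, hysteresis value, and cluster_t type are made up for
 * illustration; they are not part of ARM's actual implementation.
 */
#include <stdio.h>

typedef enum { CLUSTER_LITTLE, CLUSTER_BIG } cluster_t;

/* Hypothetical LITTLE-cluster capacity, as a percentage of peak demand. */
#define LITTLE_CAPACITY_PCT  40
#define DOWN_HYSTERESIS_PCT  30   /* drop back only when load falls well below capacity */

static cluster_t choose_cluster(cluster_t current, unsigned demand_pct)
{
    if (current == CLUSTER_LITTLE && demand_pct > LITTLE_CAPACITY_PCT)
        return CLUSTER_BIG;                  /* ramp up for a heavy task        */
    if (current == CLUSTER_BIG && demand_pct < DOWN_HYSTERESIS_PCT)
        return CLUSTER_LITTLE;               /* settle back for background work */
    return current;
}

int main(void)
{
    cluster_t c = CLUSTER_LITTLE;
    const unsigned demand[] = { 10, 25, 70, 90, 35, 15 };  /* e-mail -> game -> idle */

    for (unsigned i = 0; i < sizeof(demand) / sizeof(demand[0]); i++) {
        c = choose_cluster(c, demand[i]);
        printf("demand %3u%% -> %s cluster\n", demand[i],
               c == CLUSTER_BIG ? "big" : "LITTLE");
    }
    return 0;
}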
The first generation of big.LITTLE design (see Figure 1) combines a high-performance Cortex-A15 multiprocessor cluster with a Cortex-A7 multiprocessor cluster offering up to 4x the energy efficiency of current designs. These processors are 100 percent architecturally compatible and have the same functionality, including support for more than 4 GB of memory, virtualization extensions, and functional units such as NEON advanced Single Instruction, Multiple Data (SIMD) instructions for efficient multimedia processing and floating-point support. This allows software applications compiled for one processor type to run on the other without modification, which opens the possibility of mapping application tasks to the right processor on an opportunistic basis.
Figure 1: ARM’s big.LITTLE processing combines the high performance of the Cortex-A15 multiprocessor with the energy efficiency of the Cortex-A7 multiprocessor and enables the same application software to switch seamlessly between them.
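Because the same binary runs on either cluster, a platform integrator can also steer work explicitly when that is useful, for example by confining a low-intensity background task to the LITTLE cores with standard Linux CPU affinity. The assumption that CPUs 0-3 form the LITTLE cluster is board-specific and purely illustrative; in the normal case the scheduler makes this decision on its own and no application change is needed:

/*
 * Confine the calling task to a hypothetical LITTLE cluster (CPUs 0-3)
 * using standard Linux CPU affinity. The CPU numbering is board-specific
 * and shown only for illustration.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t little;
    CPU_ZERO(&little);
    for (int cpu = 0; cpu < 4; cpu++)        /* hypothetical LITTLE cluster: CPUs 0-3 */
        CPU_SET(cpu, &little);

    if (sched_setaffinity(0, sizeof(little), &little) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("background work now confined to the LITTLE cluster\n");
    /* ... low-intensity work such as mail sync would run here ... */
    return 0;
}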
As we continue to usher in this new era of computing, mobile phone designers will find themselves focusing on how to deliver devices that allow for increased data consumption, connectivity, and battery life. ARM’s big.LITTLE processing addresses the challenge of designing a System-on-Chip (SoC) capable of delivering both the highest performance and the highest energy efficiency possible within a single processor subsystem. This coupled design opens up the potential for a multitude of new applications and use cases by enabling optimal distribution of the load between big and LITTLE cores and by matching compute capacity to workloads.

