Driven by the thirst for 3D gaming in consumer electronics, current graphics processing units (GPUs) have evolved into powerful, programmable vector processors that can speed up a wide variety of software applications. These "general-purpose GPUs," as they are known, are no longer limited to the consumer market. They are making their way into the embedded market with the arrival of the new AMD Embedded G-Series platform.
A portable atomic clock is just the ticket for many UAVs, and the more
SWaP-optimized the better. The Chip-Scale Atomic Clock (CSAC) fits the
bill with the low power draw and accurate performance inherent in its design.
Unmanned Aerial Vehicles
(UAVs) began as tools for military surveillance. As their capabilities
expanded, they found use in civilian applications such as border
patrols and drug interdiction, while on the military side the expanded
capabilities led to missions using armed UAVs. Throughout
their use, accurate clocks have been required for UAVs to carry out
their missions. A principal need has been navigation; UAVs typically use
a clock that has been synchronized to Global Positioning System (GPS)
for very accurate timing. However, when the GPS signal is lost, the
clock is used to provide a “holdover” function that integrates with a
backup navigation system, usually some form of an Inertial Navigation System (INS).
The clock’s holdover performance is important because, in military
applications, GPS signal loss is sometimes due to intentional jamming,
which can persist for long periods of time.

Accurate clocks are also needed in UAV communications. As UAV sensor payloads have advanced from still photos to video, to video integrated with infrared and other sensor data, high-density encrypted waveforms have been employed to transmit this data, as well as to receive vehicle control data. These waveforms can only stay synchronized with stable, accurate clocks.

Layered on top of these application requirements are the demands of Size, Weight, and Power (SWaP). Almost every component in the electronics of a UAV – whether part of the basic airframe or part of the specialized payload – is being pushed to reduce SWaP so that a given UAV can increase its mission duration (for more “persistent surveillance” in military terminology), or so that it can add more sensor capabilities without shortening mission duration. The choice of clock onboard can positively or negatively affect SWaP in UAV design. (For the full article, see http://smallformfactors.com/articles/chip-scale-swap-design-challenges/)
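Conceptually, the holdover function described above amounts to estimating the local oscillator's drift while GPS is available and extrapolating the correction once the signal is lost. The C sketch below illustrates that loop; the once-per-second cadence, the filter constant, and the function itself are illustrative assumptions, not an actual CSAC interface.

```c
/* Minimal sketch of GPS-disciplined clock holdover in C. The 1 Hz
 * update cadence, filter constant, and time units are illustrative
 * assumptions, not a CSAC API. Call once per second. */
#include <stdbool.h>

double corrected_time_ns(double local_ns, bool gps_valid, double gps_ns)
{
    static double offset_ns = 0.0;   /* local clock's offset from GPS time     */
    static double drift_ns_s = 0.0;  /* estimated drift, ns per second (= ppb) */

    if (gps_valid) {
        /* Disciplined mode: measure the offset directly and keep a
         * smoothed estimate of how fast it changes (the drift). */
        double new_offset = gps_ns - local_ns;
        drift_ns_s = 0.9 * drift_ns_s + 0.1 * (new_offset - offset_ns);
        offset_ns = new_offset;
    } else {
        /* Holdover mode: no GPS, so extrapolate the offset using the
         * last drift estimate. Error grows for as long as GPS is
         * absent, which is why a low-drift clock such as a CSAC matters. */
        offset_ns += drift_ns_s;
    }
    return local_ns + offset_ns;
}
```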
ACROSSER Technology, a world-leading networking communication designer and manufacturer, launches its ANR-IB751N1/A/B networking appliances.
ANR-IB751N1/A/B networking appliances are built around scalable Intel
3rd generation Core i7/i5/i3 processors (formerly code-named Ivy
Bridge). They feature a 1U rackmount
chassis, a maximum of 16 GB DDR3 memory, 8 x GbE ports, optional 2 or 4 x
fiber SFP LAN ports, 2 pairs of LAN bypass, 2 x USB 3.0 ports, 2 x SATA
ports, and a console port.
ACROSSER also offers new Atom series solutions, including the AMB-D255T1 Mini-ITX industrial mainboard and the AMB-N280S1 fanless 3.5-inch single board computer. The AMB-D255T1 is equipped with an Intel Atom D2550 processor; the AMB-N280S1 is equipped with an Intel Atom N2800. Both carry a 5-7 year product warranty.
This seems to be the year for milestone events in the EDA industry,
though calculations show some of the “anniversary” designations to be
premature. Nevertheless, the first big EDA event of the year is the
Design and Verification Conference (DVCon),
held in San Jose, CA every February. DVCon celebrated its 10th
anniversary this year, after a transformation from HDLcon in 2003, which
followed the earlier union of the VHDL International User’s Forum and the International Verilog HDL Conference. Those predecessor conferences trace their origins back 25 years and 20 years, respectively.
After DVCon, EDA marketers quickly turn to preparations for the June Design Automation Conference (DAC), perhaps with a warm-up at Design, Automation and Test in
Europe (DATE) in March. DAC is the big show, however, and this year
marks the 50th such event (and its 49th anniversary). Phil Kaufman Award
winner Pat Pistilli received the EDA industry’s highest honor for his
pioneering work in creating DAC, which grew from his amusingly named
Society to Help Avoid Redundant Effort (SHARE) conference in 1964.
Milestones
inevitably lead to some reflection, but also provide an opportunity to
look forward to what the future will bring. In our 2nd annual EDA Digest
Resource Guide, we will be asking EDA companies to share what they see
as the biggest challenges facing the industry in the next five years,
and how the industry will change to meet those challenges. Will future
innovations be able to match the impact of the greatest past
developments in EDA, which enabled the advances in electronics that we
benefit from today?
To put that question in perspective, I’ve been developing a Top 10 list of the most significant developments in the history of EDA, based on my personal experiences over the course of my career. That doesn’t go back quite as far as Pat Pistilli’s, but I have seen many of the major developments in EDA first hand, going back to when I started as an IC designer at Texas Instruments. (This was a few years after we stopped cutting rubylith, in case you were wondering.)
We will also be conducting a survey of readers, and will publish the results in the EDA Digest Resource Guide in time for DAC-50. To get things started, here are the first five EDA breakthroughs on my list, roughly in historical order.
As
the role of the mobile device continues to evolve, chip designers will face
increased pressure to create processors that can handle next-generation
computing. Designers need to look beyond single-core solutions to deliver
powerful, energy-efficient processors that allow for the "always on, always
connected" experience consumers want.
Mobile usage has changed significantly, with today’s
consumers increasingly using their smartphones for the majority of the
activities in their connected lives. This includes high-performance tasks such
as Web browsing, navigation and gaming, and less demanding background tasks such
as voice calls, social networking, and e-mail services. As a result, the mobile
phone has become an indispensable computing device for many consumers.
At the same time, new mobile form factors such as tablets are
redefining computing platforms in response to consumer demand. This is creating
new ways for consumers to interact with content and bringing what was once only
possible on a tethered device to the mobile world. What we’re seeing is truly
next-generation computing.
As with any technology shift, designers must consider several
factors to address the changing landscape, but in this case, a few issues stand
out more than the rest as trends that will define where mobility is going.
Increased data
Consumers today desire an on-demand computing experience that
puts data at the ready anytime. Gone are the days when
consumers owned smartphones for the sole purpose of making phone calls. They now
require a rich user experience allowing them to access documents, e-mail,
pictures, video, and more on their mobile devices. With more than
37 billion applications already downloaded, data consumption continues to rise.
According to a recent Cisco report, global mobile data traffic will grow to
10.8 exabytes (one exabyte is 1 billion gigabytes) per month by 2016, and by then
video is expected to comprise 71 percent of all mobile data traffic.
Battery life
Mobile computing has always required a balance of performance
and power consumption. The combination of smaller form factors and consumers
demanding more out of their devices has led chip designers to develop ways
around the power/performance gap. Without cutting power altogether, designers
turn to techniques like clock scaling, where processor speeds vary based on the
intensity of a task. Designers have also turned to dual- and quad-core
processors that decrease power while still delivering performance. As consumers
continue to trend toward an “always on, always connected” experience, processors
must become more powerful and more energy efficient.
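Clock scaling of this kind is typically implemented as a governor that picks an operating point from the estimated workload. The C sketch below shows the idea in miniature; the frequency table and the 80 percent headroom threshold are invented for illustration and do not correspond to any particular SoC.

```c
/* Minimal sketch of a utilization-driven clock-scaling governor.
 * The frequency table and thresholds are illustrative assumptions. */
#include <stdio.h>

static const int freq_mhz[] = { 200, 600, 1000, 1500 };  /* available operating points */
#define NUM_FREQS (sizeof(freq_mhz) / sizeof(freq_mhz[0]))

/* Pick the lowest frequency that keeps estimated utilization below
 * 80 percent, so light background tasks run slow and cheap while
 * demanding tasks still get full speed. */
static int pick_freq(double busy_mhz_needed)
{
    for (size_t i = 0; i < NUM_FREQS; i++) {
        if (busy_mhz_needed < 0.8 * freq_mhz[i])
            return freq_mhz[i];
    }
    return freq_mhz[NUM_FREQS - 1];  /* saturated: run flat out */
}

int main(void)
{
    /* E-mail sync vs. 3D gaming: same policy, very different clocks. */
    printf("background sync: %d MHz\n", pick_freq(120.0));
    printf("3D game:         %d MHz\n", pick_freq(1100.0));
    return 0;
}
```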
Connectivity
The way consumers use computing devices is drastically
changing, as their primary computing devices are no longer stationary, but
carried around in their pockets, bags, and purses. The number of mobile
connected devices will exceed the world’s population in 2012, according to
industry studies. By 2016 there will be more than 10 billion mobile Internet
connections around the world, with 8 billion of them being personal devices and
2 billion Machine-to-Machine (M2M) connections.
Implications for chips
So where is Moore’s Law going to take the embedded industry
with this mobile revolution? History predicts a doubling every 18 months from
thousands to billions of transistors, but actually looking at the performance of
a single processor shows that it has all but stalled because the amount of power
that can be consumed in the system has peaked.
For any single processor in the future, heat dissipation will limit any significant
increase in speed. Once a device hits its thermal barrier, it will simply melt,
or in the case of a mobile phone, start to burn the user. Apart from the
physical aspects of heat dissipation, it is also hugely power inefficient. The
amount of power it takes to tweak a processor to perform faster and faster
becomes exponential, and the last little bit is especially expensive. Whereas in
the past doubling the size meant doubling the speed, doubling the size now
yields only a small percentage of additional speed. That’s one of the reasons why designers have hit
a limit for single-core systems.
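To see why, recall that dynamic CMOS power scales roughly as P = C x V^2 x f, and that a higher frequency generally demands a higher supply voltage. A back-of-envelope C sketch, with invented numbers:

```c
/* Back-of-envelope illustration of why frequency scaling is power-
 * inefficient: dynamic CMOS power scales roughly as P = C * V^2 * f,
 * and higher f needs higher V. Numbers here are invented for scale. */
#include <stdio.h>

static double dyn_power(double cap_nf, double volts, double freq_mhz)
{
    return cap_nf * volts * volts * freq_mhz / 1000.0;  /* arbitrary watt-ish units */
}

int main(void)
{
    /* Assume doubling frequency demands roughly 30 percent more voltage. */
    double p1 = dyn_power(1.0, 1.0, 1000.0);   /* baseline: 1 GHz at 1.0 V */
    double p2 = dyn_power(1.0, 1.3, 2000.0);   /* 2 GHz at 1.3 V           */
    printf("2x frequency costs %.1fx the power\n", p2 / p1);  /* ~3.4x */
    return 0;
}
```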
Solving the power problem
If designers can’t make a single core go faster, the number
of individual cores has to increase. This brings the benefit of being able to
match each core to the demands being placed on it.
ARM’s big.LITTLE processing extends consumers’ “always on,
always connected” mobile experience with up to double the performance and 3x the
energy savings of existing designs. It achieves this by grouping a “big” multicore
processor with a “little” multicore processor and seamlessly selecting the right
processor for the right task based on performance requirements. This dynamic
selection is transparent to the application software or middleware running on the processors.
The first generation of big.LITTLE design (see Figure 1)
combines a high-performance Cortex-A15 multiprocessor cluster with a Cortex-A7
multiprocessor cluster offering up to 4x the energy efficiency of current
designs. These processors are 100 percent architecturally compatible and have
the same functionality, including support for more than 4 GB of memory, virtualization extensions, and functional units such as NEON advanced Single Instruction,
Multiple Data (SIMD) instructions for efficient multimedia processing and
floating-point support. This allows software applications compiled for one
processor type to run on the other without modification. Because the same
application can run on a Cortex-A7 or a Cortex-A15 without modification, this
opens the possibility to map application tasks to the right processor on an
opportunistic basis.
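A scheduler-level caricature of this opportunistic mapping fits in a few lines of C. The load threshold and task model below are assumptions for illustration; they are not ARM's actual switching heuristic.

```c
/* Illustrative sketch of big.LITTLE task placement: route light tasks
 * to the energy-efficient cluster and heavy tasks to the fast cluster.
 * The threshold and task model are assumptions, not ARM's algorithm. */
#include <stdio.h>

enum cluster { LITTLE_A7, BIG_A15 };

struct task {
    const char *name;
    double demand;  /* estimated load, 0.0 (idle) .. 1.0 (needs a full big core) */
};

static enum cluster place(const struct task *t)
{
    /* Below ~40 percent of a big core's capacity, the Cortex-A7
     * cluster finishes the work for far less energy; above it, the
     * Cortex-A15 cluster is engaged. Same ISA, so no recompilation. */
    return (t->demand < 0.4) ? LITTLE_A7 : BIG_A15;
}

int main(void)
{
    struct task tasks[] = {
        { "e-mail sync",   0.10 },
        { "voice call",    0.20 },
        { "web rendering", 0.70 },
        { "3D game",       0.95 },
    };
    for (int i = 0; i < 4; i++)
        printf("%-14s -> %s\n", tasks[i].name,
               place(&tasks[i]) == BIG_A15 ? "Cortex-A15 (big)" : "Cortex-A7 (LITTLE)");
    return 0;
}
```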
Figure 1: ARM’s big.LITTLE processing combines
the high performance of the Cortex-A15
multiprocessor with the energy efficiency of the Cortex-A7 multiprocessor and
enables the same application software to switch seamlessly between
them.
As we continue to usher in this new era of computing, mobile
phone designers will find themselves focusing on how to deliver devices that
allow for increased data consumption, connectivity, and battery life. ARM’s
big.LITTLE processing addresses the challenge of designing a System-on-Chip (SoC) capable of delivering both the highest
performance and the highest energy efficiency possible within a single processor
subsystem. This coupled design opens up the potential for a multitude of new
applications and use cases by enabling optimal distribution of the load between
big and LITTLE cores and by matching compute capacity to workloads.
The speed of innovation in automotive IVI is making a lot of heads turn.
No question, Linux OS and Android are the engines for change.
The open source software movement has forever transformed the mobile device landscape.
Consumers are able to do things today that 10 years ago were
unimaginable. Just when smartphone and tablet users are comfortable
using their devices in their daily lives, another industry
is about to be transformed. The technology enabled by open source in
this industry might be even more impressive than what we’ve just
experienced in the smartphone industry.
The industry is automotive, and already open source software has made significant inroads in how both driver and passenger interact within the automobile. Open source stalwarts Linux and Google are making significant contributions not only in the user/driver experience, but also in safety-critical operations, vehicle-to-vehicle communications, and automobile-to-cloud interactions.
Initially,
automotive OEMs turned to open source to keep costs down and open up
the supply chain. In the past, Tier 1 suppliers and developers of
In-Vehicle Infotainment (IVI) systems would treat an infotainment center
as a “black box” composed mostly of proprietary software components and dedicated hardware. The OEM was not allowed to probe inside, and had no ability to “mix and match” the component parts.
The results were sometimes substandard systems in which the automotive
OEM had no say, and which it had no ability to maintain. With the advent of open
source, developers are now not only empowered to cut software
development costs, but they also have control of the IVI system they
want to design for a specified niche. Open source software, primarily
Linux and to some extent Android,
comprises open and “free” software operating platforms or systems. What
makes Linux so special are the many communities of dedicated developers
around the world constantly updating the Linux kernel. While there are
many Linux versions, owned by a range of open source communities and
commercial organizations, Android is owned and managed exclusively by
Google.
To
understand the automotive IVI space, it’s best to look at the
technology enabled by Linux and what Android’s done to further advance
automotive multimedia technology.
Linux OS – untapped potential at every turn
There are many standards bodies and groups involved in establishing Linux in the automobile – not just in IVI, but in navigation, telematics, safety-critical functions, and more. The Linux Foundation, a nonprofit organization dedicated to the growth of Linux, recently announced the Automotive Grade Linux (AGL) workgroup. The AGL workgroup facilitates widespread industry collaboration that advances automotive device development by providing a community reference platform that companies can use for creating products. Jaguar Land Rover, Nissan, and Toyota are among the first carmakers to participate in the AGL workgroup.
Another Linux initiative, the GENIVI Alliance, was established to promote the widespread adoption of open source in IVI. The goal behind GENIVI is to allow collaboration among automakers and their suppliers across a single unified ecosystem, to streamline development, and keep costs down. The organization has flourished since its formation in 2009, and today it has more than 165 members. The GENIVI base platform (Mentor Embedded is compliant with version 3.0) accommodates a wide range of open source building blocks demanded by today’s developers.
Linux has further opened up the possibilities with safety-critical operations and multimedia communications. Hardware companies have followed suit with more IVI functions built onto a single piece of silicon, improving security and performance.
The available power of multicore SoC hardware hosting a Linux operating system is fueling rapid expansion in vehicle software in the area of telematics. In Europe, for example, by 2015, all new cars must be equipped with the eCall system, to automatically alert emergency services in the event of a collision. Services such as insurance tracking, stolen vehicle notification, real-time cloud data (traffic, weather, road conditions ahead), car-to-car communication, driverless cars, diagnostics, and servicing are also made available via in-car Internet services. To operate in this space, IVI hardware needs to have multicore processor support, GPU/high-performance graphics with multiple video outputs, Internet connectivity, and compatibility with existing in-car networks such as CAN, MOST, and AVB. Several components are already on the market, and the future potential is exciting.
Consolidating multiple functions into a single Linux-based Electronic Control Unit (ECU) allows for a reduction in component count, thereby reducing overall vehicle costs. Maintenance becomes easier. And wire harness costs are reduced as the total ECU count drops. As Linux becomes more widespread in vehicles, additional technologies will consolidate – for example, instrument clusters and AUTOSAR-based ECUs may coexist with infotainment stacks. It’s also important to realize that the complexity of software and the amount of software code used will only increase as these new technologies become standard. Already more than 100 million lines of code are used in the infotainment system of the S-Class Mercedes-Benz, and according to Linuxinsider.com, that number is projected to triple by 2015 (Figure 1).
Figure 1: Software complexity in IVI systems continues to grow. Today, the IVI system of an S-Class Mercedes has 100 million lines of code. By 2015, it is expected to be 300 million. A Linux-based solution, capable of scaling to handle the complexity, is mandatory.
Android apps hit the road
The Android operating system, on the other hand, was designed from the start to support mobile devices and has proved that it can serve more than mobile phones. Using the Android OS for in-vehicle entertainment provides all the entertainment features offered by a top-of-the-range, in-dash infotainment system with the addition of informative, driver-assisting content including hands-free calling, a multimedia center, and a navigation system/Google Maps.

For an open source expandable system (whereby the framework can be extended and applications can be developed for it), the Android OS can be enhanced to support multiple audio and video feeds. For example, IVI audio requirements include music, phone calls, sensor warnings, and navigation announcements, which must be managed and prioritized. Managing multiple displays, with an information focus for the driver and an entertainment focus for passengers, is also a requirement. The UI for the driver should be arranged to minimize distraction, while passengers will want as much content as possible from their UIs.
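That audio requirement is essentially an arbitration problem: safety and telephony sources must win the speaker, ducking entertainment sources. A minimal C sketch, with invented source names and priority values:

```c
/* Minimal sketch of IVI audio arbitration: higher-priority sources
 * (sensor warnings, phone calls) duck lower-priority ones
 * (navigation prompts, music). Priorities are invented for illustration. */
#include <stdio.h>
#include <stdbool.h>

struct source {
    const char *name;
    int priority;   /* higher wins the speaker */
    bool active;
};

static void mix(struct source *s, int n)
{
    int top = -1;
    for (int i = 0; i < n; i++)                 /* find highest active priority */
        if (s[i].active && (top < 0 || s[i].priority > s[top].priority))
            top = i;
    for (int i = 0; i < n; i++)
        if (s[i].active)
            printf("%-15s %s\n", s[i].name, i == top ? "full volume" : "ducked");
}

int main(void)
{
    struct source sources[] = {
        { "music",          1, true  },
        { "navigation",     2, true  },
        { "phone call",     3, false },
        { "sensor warning", 4, true  },  /* e.g., collision alert */
    };
    mix(sources, 4);   /* sensor warning wins; music and nav are ducked */
    return 0;
}
```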
But many automotive OEMs and developers ask, “Why not just use the Android smartphone and tie it into a vehicle’s dash?” Not only would this be more cost effective for the developer, but the user would have instant familiarity with the system. One organization promoting the use of the smartphone as an IVI in-dash system is the Car Connectivity Consortium (CCC). The CCC provides standards and recipe books for tethering a smartphone to the infotainment head unit. CCC members implement MirrorLink (Figure 2), a technology standard for controlling a nearby smartphone from the in-car infotainment system screen or via dashboard buttons and controls. This allows familiar smartphone-hosted applications and functions to be easily accessed. CCC members include more than 80 percent of the world’s automakers, and more than 70 percent of global smartphone manufacturers. The MirrorLink technology is compatible with Mentor Embedded’s GENIVI 3.0 specification Linux base platform solution.
Figure 2: An example of smartphone in-dash tethering: Drivers use the same smartphone apps in the vehicle as they do on their own smartphone, which provides a great deal of familiarity.
A recent example of smartphone tethering can be found in certain subcompact models from U.S. auto manufacturer General Motors. Select Chevrolet models carry the “MyLink” in-dash infotainment system.
From both a cost and ease-of-use perspective, tethering a smartphone makes a lot of sense. But there’s another reason to consider. Some automotive manufacturers are nervous about being too dependent on Google – as Google is the sole provider and owner of the Android mobile platform. Android built into an IVI system is an 8- to 10-year commitment, and a lot can happen in that time regarding license fees or terms of use.
Linux and Android driving together?
Despite the strengths of and differences between these two popular platforms, recent embedded architecture developments now allow the Linux and Android operating systems to happily coexist. And this might be a very good thing. For example, Android can be hosted on top of Linux using Linux Container Architecture (LXC) (Figure 3). The resources, access control, and security of the Android client are managed by the host Linux operating system. For system designers concerned about the security of Android, this represents a good way to offer Android app access, and keep other system functions on a standard Linux platform. Multicore System-on-Chip (SoC) platforms make this architecture even more attractive, as there are sufficient resources for both Linux and Android domains to perform well simultaneously. The CPU resources can be shared, along with memory, graphics processing resources, and other peripherals. The output of the two domains can be recombined into a common Human Machine Interface (HMI) allowing the user to select functions from both domains.
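Under the hood, container approaches like LXC build on Linux kernel namespaces. The C sketch below shows the core primitive – clone() with new PID and mount namespaces – that lets a guest stack such as Android see an isolated view of the system. It is a minimal illustration, not an LXC configuration.

```c
/* Sketch of the container primitive underlying LXC-style isolation:
 * clone() with new PID and mount namespaces starts a child that sees
 * its own process tree, the way a guest Android stack can be confined
 * by a host Linux. Illustrative only; real LXC adds cgroups, network
 * namespaces, and filesystem setup. Requires root to run. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int guest_main(void *arg)
{
    (void)arg;
    /* Inside the new PID namespace this process is PID 1. */
    printf("guest sees itself as pid %d\n", (int)getpid());
    return 0;
}

int main(void)
{
    pid_t pid = clone(guest_main, child_stack + sizeof(child_stack),
                      CLONE_NEWPID | CLONE_NEWNS | SIGCHLD, NULL);
    if (pid < 0) {
        perror("clone (needs root)");
        return EXIT_FAILURE;
    }
    printf("host sees the guest as pid %d\n", (int)pid);
    waitpid(pid, NULL, 0);
    return 0;
}
```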
Figure 3: There are several ways to include Android (Android apps) in a Linux-based IVI solution. One method, which is becoming increasingly popular, is using Linux Container Architecture. Here, Android sits as a guest OS on top of the Linux kernel. Privileges and permissions are tightly controlled.
Exciting times ahead
Both Linux and Android are extremely versatile and powerful operating systems worthy of consideration in IVI systems. We are still in the infancy of what these two open source platforms can do for IVI. Now is the perfect time to start developing, or to join a consortium, so that you too can reap the fruits of what IVI promises down the road.
Given the increased complexity of processors and applications, the current generation of Operating Systems (OSs) focuses mostly on software integrity while partially neglecting the need to extract maximum performance out of the existing hardware.
Processors perform as well as OSs allow them to. A computing platform, embedded or otherwise, consists of not only physical resources – memory, CPU cores, peripherals, and buses – managed with some success by resource partitioning (virtualization), but also performance resources such as CPU cycles, clock speed, memory and I/O bandwidth, and main/cache memory space. These resources are managed by ancient methods like priorities or time slices, or not managed at all. As a result, processors are underutilized and consume too much energy, robbing them of their true performance potential.
Most existing management schemes are fragmented. CPU cycles are managed by priorities and temporal isolation, meaning applications that need to finish in a preset amount of time are reserved that time, whether they actually need it or not. Because execution time is not safely predictable due to cache misses, mis-speculation, and I/O blocking, the reserved time is typically longer than it needs to be. To ensure that the modem stack in a smartphone receives enough CPU cycles to carry on a call, other applications might be restricted from running concurrently. This explains why some users of an unnamed brand handset complain that when the phone rings, GPS drops.
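On Linux, this kind of priority-based reservation looks like the following C sketch: a latency-critical task such as a modem stack is pinned at a fixed real-time priority, ahead of everything else, whether it needs the cycles or not. The priority value is an illustrative assumption.

```c
/* Minimal sketch of priority-based CPU-cycle management on Linux:
 * give a latency-critical task (e.g., a modem stack) a fixed
 * real-time priority so it preempts ordinary tasks. The priority
 * value is an illustrative assumption; running this needs root. */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param param = { .sched_priority = 80 };

    /* SCHED_FIFO: run until blocked or preempted by higher priority.
     * Everything at default priority now waits while this task runs,
     * which is exactly the coarse, static reservation the article
     * criticizes: cycles are held whether they are needed or not. */
    if (sched_setscheduler(0, SCHED_FIFO, &param) != 0) {
        perror("sched_setscheduler (needs root)");
        return 1;
    }
    puts("running with real-time priority 80");
    /* ... modem processing loop would go here ... */
    return 0;
}
```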
Separate from this, power management has recently received a great deal of interest. Notice the “separate” characterization. Most deployed solutions are good at detecting idle times, use modes with slow system response, or particular applications where the CPU can run at lower clock speeds and thus save energy. For example, Intel came up with Hurry Up and Get Idle (HUGI). To understand HUGI, consider this analogy: Someone can use an Indy car at full speed to reach a destination and then park it, but perhaps using a Prius to get there just in time would be more practical. Which do you think uses less gas? Power management based on use modes has too coarse a granularity to effectively mine all energy reduction opportunities all the time.
Ideally, developers want to vary the clock speed/voltage to match the instantaneous workload, but that cannot be done by merely focusing on the running application. Developers might be able to determine the minimum clock speed for an application to finish on time, but can they slow down the clock not knowing how other applications waiting to run will be affected if they are delayed? Managing tasks and clock speed (power) separately cannot lead to optimum energy consumption. The winning method will simultaneously manage/optimize all performance resources, but at a minimum, manage the clock speed and task scheduling. Imagine the task scheduler as the trip planner and the clock manager as the car driver. If the car slows down, the trip has to be re-planned. The driver might have to slow down because of bad road conditions (cache misses) or stop at a railroad barrier (barrier in multithreading, blocked on buffer empty due to insufficiently allocated I/O bandwidth, and so on). Applications that exhibit data-dependent execution time also present a problem, as the timing of when they finish isn’t known until they finish. What clock speed should be allocated for these applications in advance?
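The Indy-versus-Prius analogy can be made concrete: because dynamic power scales roughly with V^2 x f, racing to idle at high clock and voltage can cost more energy than pacing the work to finish just in time. The numbers in this C sketch are invented purely to illustrate the comparison.

```c
/* Back-of-envelope comparison of "hurry up and get idle" versus
 * just-in-time pacing for a task with a 10 ms deadline. Voltage,
 * frequency, and idle-power numbers are invented for illustration. */
#include <stdio.h>

static double energy_mj(double volts, double freq_ghz, double active_ms,
                        double idle_mw, double idle_ms)
{
    double active_mw = 1000.0 * volts * volts * freq_ghz;  /* P = C*V^2*f, C folded in */
    return (active_mw * active_ms + idle_mw * idle_ms) / 1000.0;
}

int main(void)
{
    /* Same work: race it at 2 GHz / 1.3 V and idle for the rest of the
     * period, or pace it at 1 GHz / 1.0 V and finish exactly on time. */
    double hugi  = energy_mj(1.3, 2.0, 5.0, 50.0, 5.0);   /* ~17.2 mJ */
    double paced = energy_mj(1.0, 1.0, 10.0, 50.0, 0.0);  /* ~10.0 mJ */
    printf("race-to-idle: %.1f mJ, just-in-time: %.1f mJ\n", hugi, paced);
    return 0;
}
```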
With the touch-screen sector now entering a new phase of innovation, the issue of applying multitouch operation to the larger format displays found in industrial and public use settings is becoming a key engineering concern. Designers must examine the sensor technology options available today and consider using new single-layer projected capacitive sensing technology to enable sophisticated human-machine interactions in large displays destined for harsh environments.
Multitouch sensor technology has the potential to revolutionize the way we connect with all manner of electronics hardware, giving touch-screen-based Graphical User Interfaces (GUIs) the ability to recognize complex gestures using several fingers such as rotating, two-digit scrolling, three-digit dragging, and pinch zoom, as well as allowing multiple users to collaborate. Analyst firm Markets & Markets predicts that the global multitouch business will reach $5.5 billion by 2016 (constituting more than 30 percent of the total touch panel market by this stage). The multitouch segment is currently exhibiting a compound annual growth rate of more than 18 percent, with the portable consumer sector driving the vast majority of this growth.
Moving forward, the problem for design engineers is knowing how to bring the multitouch capabilities that are already becoming commonplace in smartphones and tablet PCs to other areas that could also derive benefit from them. Digital signage, Point-Of-Sale (POS), public information, and industrial control systems could profit greatly from this sort of functionality. However, certain obstacles are inhibiting the adoption of multitouch in these nonconsumer sectors.
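To give a flavor of what multitouch gesture recognition involves, the pinch-zoom gesture mentioned earlier reduces to tracking the distance between two touch points over time. The C sketch below shows that core computation with invented coordinates; a real touch controller must also track touch IDs, debounce, and reject palms.

```c
/* Core arithmetic of a pinch-zoom gesture: the zoom factor is the
 * ratio of the current to the initial distance between two touch
 * points. Coordinates here are invented for illustration. */
#include <math.h>
#include <stdio.h>

struct touch { double x, y; };

static double dist(struct touch a, struct touch b)
{
    return hypot(a.x - b.x, a.y - b.y);
}

int main(void)
{
    struct touch start[2] = { { 100, 200 }, { 220, 200 } };  /* fingers down   */
    struct touch now[2]   = { {  60, 200 }, { 280, 200 } };  /* fingers spread */

    double zoom = dist(now[0], now[1]) / dist(start[0], start[1]);
    printf("zoom factor: %.2f\n", zoom);   /* > 1.0 means zoom in */
    return 0;
}
```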