ARM has unveiled its next-generation processor architecture, named DynamIQ. Chips built on DynamIQ, ARM says, will let manufacturers build more powerful systems on a chip tailored for computation-heavy tasks, from artificial intelligence to self-driving cars.
“It’ll be in smartphones and tablets, for sure, but also automotive networking and a whole range of other embedded devices,” ARM product marketing head John Ronco told The Verge.
ARM’s current architecture, big.LITTLE, pairs a group of powerful processors (big) with a group of power-efficient (little) ones. One benefit of this design is that it adapts to a user’s needs. For example, when your phone is on standby, the smaller cores are called upon to keep things running; when you launch an app or a game, the big cores come into play.
DynamIQ goes a step further by supporting cores that are not just big or little, but anywhere in between. This will allow manufacturers to offer more flexible and powerful chipsets at lower price points.
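The cluster-selection idea described above can be illustrated with a minimal sketch. The function, core names, and load thresholds here are entirely hypothetical, chosen only to show the principle of routing work to different core types by demand; they do not reflect ARM's actual scheduler:

```python
def pick_core(load: float) -> str:
    """Route a task to a core type based on its load estimate (0.0 to 1.0).

    Illustrative only: thresholds and the "mid" tier are assumptions,
    meant to show how DynamIQ-style designs can mix core sizes beyond
    a simple big/little split.
    """
    if load < 0.2:
        return "little"  # standby and background work stays on efficient cores
    if load < 0.7:
        return "mid"     # DynamIQ permits cores between big and little
    return "big"         # games and heavy apps wake the powerful cores

# A phone idling on standby versus launching a demanding game:
print(pick_core(0.05))  # little
print(pick_core(0.90))  # big
```

Under big.LITTLE, only the first and last branches would exist; DynamIQ's contribution is that the middle tier (and finer gradations like it) becomes possible within one cluster.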
It will also allow chipmakers to optimize their silicon for machine learning. They can build AI accelerators directly into chips, enabling systems to manage data and memory more efficiently. This could power machine-learning-driven software such as Huawei's OS feature that identifies a user's frequently used apps and allocates processing power accordingly.
By combining these hardware accelerators with its processor instructions, ARM claims DynamIQ will deliver a 50x increase in AI-related performance over the next three to five years.
Ronco says that ARM has already licensed the new architecture to a number of customers, and expects to see the first DynamIQ-powered devices hit the market by 2018.