You may not realize it, but most of the smart electronics you use every day — from IoT devices to smartphones to assisted driving systems — were built on architecture designed by Arm, a leading UK-based intellectual property (IP) provider.
Arm’s booth at CES 2018 showcased a wide range of demos and products equipped with Arm processors, including smart speakers Google Home and Amazon Echo, smart city solutions in lighting management and smart parking, a smart camera from Hive, and autonomous driving applications Cockpit Controller and Event Data Recorder.
Jem Davies, General Manager of the Arm Machine Learning Group, sat down with Synced to outline his company’s ambitious roadmap for machine learning development.
Davies’ efforts for Arm in machine learning can be traced back to 2013, when Arm tasked him with examining the AI marketplace and making appropriate acquisitions. He soon came to believe there was no market segment that wasn’t already impacted by the tech, or soon to be. “AI affects everything…mobile phones…cameras…the little smart speaker… even thermostats. Who thought of a room thermostat as a smart device?” says Davies.
In 2016 Davies led the acquisition of Apical, a company whose computer vision and imaging processors ship in over 1.5 billion devices. The acquisition bootstrapped Arm’s entry into machine learning, giving the company object detection technology while also bringing in IP that extends into neural network processing. In March 2017 Arm launched its Mali-C71 image signal processor (ISP), a product series designed specifically for the Advanced Driver Assistance Systems (ADAS) inside vehicles.
Last May the company took a huge step forward in AI when it introduced DynamIQ technology. Built on Arm’s big.LITTLE technology, which accommodates both powerful and relatively small processor cores on one chip, DynamIQ improves the flexibility and efficiency of multi-core processing designs, and allows more processors suited to AI tasks to be combined on a single chip.
Says Davies, “Based on DynamIQ, Cortex-A75, Cortex-A55, and Mali-G72 (Arm’s latest CPU and GPU designs) are specifically targeting machine learning workloads. So we’ve been analyzing the sort of code that people are writing and working out what best to do to execute those workloads more efficiently.”
While most machine learning systems still run on CPUs and GPUs, Davies says Arm is interested in developing special purpose processors for AI acceleration. Based on the company’s history, Arm will likely seek an acquisition as its strategy for entering the AI processor market.
The company meanwhile is also stepping up its software development and optimization efforts, a particular focus of Davies’ new Machine Learning Group. Says Davies, “The difference between an optimized software implementation and a naive one could effect a 10x improvement.”
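Davies’ point about optimized versus naive implementations is easy to see even outside Arm’s stack. The sketch below is purely illustrative (plain-Python loop versus NumPy’s BLAS-backed routine, not Arm code): both compute the same dot product, a core operation in neural network layers, but the vectorized version typically runs an order of magnitude faster because the heavy lifting happens in tuned low-level kernels — the same kind of gap Arm’s optimized libraries target on Cortex CPUs and Mali GPUs.

```python
import time
import numpy as np

def naive_dot(a, b):
    # Straightforward interpreted loop: one multiply-add at a time.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

a = np.random.rand(100_000)
b = np.random.rand(100_000)

t0 = time.perf_counter()
naive = naive_dot(a, b)
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
optimized = float(np.dot(a, b))  # vectorized, BLAS-backed routine
t_opt = time.perf_counter() - t0

# Both paths produce the same result; only the speed differs.
assert abs(naive - optimized) < 1e-6
print(f"naive: {t_naive:.4f}s, optimized: {t_opt:.4f}s")
```

The exact speedup depends on the hardware and array size, but the principle — identical math, very different cost — is what motivates hand-optimized compute libraries.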
Arm has also introduced an open-source library that provides optimized routines for accelerating machine learning frameworks such as TensorFlow, MXNet and Caffe. The functions are optimized for Arm Cortex CPU and Mali GPU processors, and target a variety of use cases, including image processing, computer vision and machine learning.
For years Arm has quietly led the way in AI, so what can we expect in 2018? Davies did not disclose details, but promised “there will be more announcements that will be incredibly exciting.”
Journalist: Tony Peng | Editor: Michael Sarazen