Ninety percent of AI-enabled devices shipped today are based on architecture developed by Arm, a leading UK-based chip intellectual property (IP) provider known for its CPU and GPU processors. To scale the impact of machine learning, the company today announced Project Trillium, an Arm IP suite that includes a machine learning processor, an object detection processor, and a library of neural network software.
Project Trillium is the company’s latest ambitious move in artificial intelligence, a ground-up design to improve the performance and efficiency of AI-enabled devices, which are expected to increase in number from 300 million today to 3.2 billion by 2028.
Arm’s efforts in machine learning can be traced back to 2013, when it began exploring the AI marketplace and made a number of strategic acquisitions. In 2017 the company launched its new Machine Learning Group and named Jem Davies as General Manager. In an exclusive interview, Davies told Synced that in his mind there was “no market segment that wasn’t already or about to be impacted by AI.”
“AI affects everything…mobile phones…cameras…the little smart speaker… even thermostats. Who thought of a room thermostat as a smart device?” said Davies.
The machine learning processor introduced today is Arm’s first-generation AI chip, targeting inference on mobile devices. The chip delivers no less than 4.6 trillion operations per second (TOPS) of mobile performance per mm2, with a further 2x-4x uplift in effective throughput in real-world use through optimizations, and an efficiency of over 3 TOPS per watt (TOPS/W) in thermal- and cost-constrained environments.
Davies says the architecture behind their machine learning processors is completely new, the result of many years of research. The architecture is optimized around 16-bit integer arithmetic.
The new architecture will provide a great solution for challenges that CPUs and GPUs struggled with, says Davies. “Convolutional Neural Networks are very common. One of the things is that a traditional architecture, whether CPU, GPU or DSP, is going to involve a lot of intermediate result storing and loading from memory. So we have produced a completely new architecture with an intelligent memory system.”
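The memory-traffic problem Davies describes can be illustrated with a toy sketch, using a 1-D convolution followed by a ReLU activation. This is my own illustration (not Arm's design): the "two-pass" version materializes the full intermediate tensor and reads it back, while the "fused" version applies the activation as each output element is produced, so the intermediate never touches memory.

```python
import numpy as np

def conv1d(x, w):
    """Valid-mode 1-D convolution (cross-correlation) of x with kernel w."""
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

def two_pass(x, w):
    """Traditional layer-by-layer execution: the convolution output is
    stored to memory, then loaded back to apply the activation."""
    intermediate = conv1d(x, w)          # written out in full
    return np.maximum(intermediate, 0.0) # read back in full

def fused(x, w):
    """Fused execution: the activation is applied while each accumulator
    value is still live, so no intermediate tensor is ever stored."""
    n = len(x) - len(w) + 1
    out = np.empty(n)
    for i in range(n):
        acc = np.dot(x[i:i + len(w)], w) # stays in the accumulator
        out[i] = max(acc, 0.0)
    return out

x = np.array([1.0, -2.0, 3.0, -4.0, 5.0])
w = np.array([0.5, -0.5])
result_two_pass = two_pass(x, w)
result_fused = fused(x, w)
```

Both paths produce identical results; the difference is purely in memory traffic, which is what a purpose-built memory system can exploit.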
The object detection processor is an iteration of Arm’s existing IP family Spirit, the object detection accelerator that powers the Hive security camera. Spirit was released in 2016, soon after Arm acquired Apical, a company whose computer vision and imaging processors ship in over 1.5 billion devices.
Arm’s second-generation processor can detect a virtually unlimited number of objects in real time in Full HD at 60fps. Its detailed people model provides rich metadata and enables detection of direction, trajectory, pose, and gesture.
Arm provides an integrated solution comprising machine learning processors and object detection processors. In real-time object recognition tasks, the object detection processor will first isolate areas of interest such as faces. The machine learning processor will then be able to analyze fewer pixels for a faster, fine-grain result.
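The two-stage flow above can be sketched in a few lines. The functions here are hypothetical stand-ins, not Arm APIs: `detect_regions` plays the role of the object detection processor (returning bounding boxes for regions of interest) and `classify` plays the role of the ML processor, which only has to analyze the small crops.

```python
import numpy as np

def detect_regions(frame):
    """Stage 1 stand-in (object detection processor): return bounding
    boxes (x, y, w, h) for regions of interest, e.g. faces.
    Stubbed here to return two fixed regions."""
    return [(0, 0, 8, 8), (16, 16, 8, 8)]

def classify(patch):
    """Stage 2 stand-in (ML processor): fine-grained analysis on a small
    crop. Stubbed with a trivial brightness check."""
    return "bright" if patch.mean() > 0.5 else "dark"

def pipeline(frame):
    results = []
    for (x, y, w, h) in detect_regions(frame):
        patch = frame[y:y + h, x:x + w]  # far fewer pixels than the frame
        results.append(((x, y, w, h), classify(patch)))
    return results

frame = np.zeros((32, 32))
frame[0:8, 0:8] = 1.0  # light up the first region of interest
labels = [label for _, label in pipeline(frame)]
```

Because stage 2 sees only the cropped regions rather than the full frame, the expensive fine-grained analysis runs over a fraction of the pixels.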
Arm’s neural network software library is a collection of building blocks for imaging, vision and machine learning workloads. Developers can use the software with Arm’s existing implementation tools such as Compute Library to accelerate their algorithms and applications, or CMSIS-NN to maximize performance at the edge. The library supports mainstream frameworks such as TensorFlow and Caffe, and is optimized for Arm Cortex CPU, Mali GPU, and new machine learning processors.
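Libraries targeting constrained edge devices, such as CMSIS-NN, typically rely on low-precision integer kernels rather than floating point. A minimal sketch of the general idea (my own illustration of symmetric 8-bit quantization, not Arm's actual kernel code): weights and activations are mapped to int8 with per-tensor scales, the dot product is accumulated in int32, and the result is dequantized at the end.

```python
import numpy as np

def quantize(v, scale):
    """Symmetric quantization: map floats to int8 using a shared scale."""
    return np.clip(np.round(v / scale), -127, 127).astype(np.int8)

def quantized_dot(qa, qb, scale_a, scale_b):
    """Integer dot product: accumulate in int32, then dequantize once."""
    acc = np.dot(qa.astype(np.int32), qb.astype(np.int32))
    return float(acc) * scale_a * scale_b

x = np.array([0.5, -1.0, 0.25])   # example activations
w = np.array([1.0, 0.5, -0.5])    # example weights
scale_x = np.abs(x).max() / 127.0
scale_w = np.abs(w).max() / 127.0
approx = quantized_dot(quantize(x, scale_x), quantize(w, scale_w),
                       scale_x, scale_w)
exact = float(np.dot(x, w))       # reference float result
```

The integer result closely tracks the float reference while needing only 8-bit multiplies, which is what makes such kernels efficient on small cores.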
Arm machine learning processors will be delivered to partners this summer, and object detection processors will be available by the end of this quarter.
Journalist: Tony Peng | Editor: Michael Sarazen
The growing number of AI applications, improving compute power, and declining hardware costs are driving sales of machine learning processors.