The computational power of smartphones and tablets has skyrocketed to the point where they rival desktop computers from only a few years ago. While mobile devices easily run standard smartphone apps, today's artificial intelligence algorithms can be too compute-heavy for even high-end devices to handle.
New research from ETH Zurich examines the current state of Deep Learning (DL) on Android platforms, ranks existing frameworks and programming models, and identifies the limitations of running AI on smartphones.
Researchers studied acceleration resources on four main mobile chipset platforms (Qualcomm, HiSilicon, MediaTek and Samsung) and compared real-world performance results across various SoCs. AI Benchmark collected results covering all the main existing hardware configurations.
In a series of tests the Huawei P20 Pro tripled the next-best phone's "AI-Score," with phones from OnePlus, HTC and Samsung rounding out the top five. The P20 Pro had the tremendous advantage of being the first device equipped with the state-of-the-art Kirin 970 SoC, which was designed specifically for AI applications.
To determine whether a particular smartphone is powerful and fast enough to run the latest Deep Neural Networks to perform AI-based tasks, researchers conducted nine key AI tests:
- Object Recognition / Classification with the MobileNet-V1 neural network
- Object Recognition / Classification with the Inception-V3 neural network
- Face Recognition
- Image Deblurring
- Image Super-Resolution with the VGG-19 neural network on CPU, NPU and DSP
- Image Super-Resolution with the SRGAN neural network on CPU only
- Semantic Image Segmentation
- Photo Enhancement
- Memory Limits
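The paper does not spell out its timing procedure here, but latency benchmarks of this kind typically average several timed forward passes after discarding a few warm-up runs. A minimal, hypothetical sketch of that pattern (function name and defaults are illustrative, not AI Benchmark's actual code):

```python
import time
from statistics import mean

def time_inference(run_fn, warmup=2, iters=10):
    """Return the mean latency in milliseconds of a zero-argument
    inference callable, averaged over `iters` timed runs.

    Warm-up runs are executed first and discarded, since the first
    passes often pay one-time costs (caches, JIT, driver setup).
    """
    for _ in range(warmup):
        run_fn()  # discarded warm-up pass
    latencies = []
    for _ in range(iters):
        start = time.perf_counter()
        run_fn()
        latencies.append((time.perf_counter() - start) * 1000.0)
    return mean(latencies)
```

In practice `run_fn` would wrap one forward pass of the model under test, e.g. `time_inference(lambda: interpreter.invoke())` for a TensorFlow Lite interpreter.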
The tests can be divided into two sets. In the first (tests 1, 2, 4, 5, 8 and 9), researchers used CNN models wholly supported by the Android Neural Networks API (NNAPI), so the tests could run with hardware acceleration on mobile devices with suitable chipsets and drivers. As an intermediate layer, the NNAPI handles communications between the higher-level machine learning framework and the device's hardware acceleration resources.
Furthermore, using the NNAPI avoided problematic scenarios such as the system failing to automatically detect the AI accelerators and instead performing the computations on CPU.
The second set of tests (3, 6 and 7) involved neural networks running entirely on CPUs, and was used to measure the speed of CPU-based execution. Additionally, in cases in the first set of tests where the NNAPI drivers were missing, computations fell back to the CPU.
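On Android this fallback happens inside NNAPI and the ML framework, but the underlying pattern, trying an accelerated path and reverting to a CPU implementation when the driver is absent or rejects the model, can be sketched generically. A hypothetical illustration (the accelerator and CPU callables stand in for real delegates):

```python
def run_with_fallback(input_data, accelerated_fn=None, cpu_fn=None):
    """Run inference on the accelerator if possible, else on the CPU.

    accelerated_fn may be None (no driver installed) or may raise
    RuntimeError at call time (driver present but model unsupported).
    Returns (output, path) where path names the backend actually used.
    """
    if accelerated_fn is not None:
        try:
            return accelerated_fn(input_data), "accelerator"
        except RuntimeError:
            pass  # e.g. the NNAPI driver rejected this model
    return cpu_fn(input_data), "cpu"
```

The same shape appears in real frameworks: TensorFlow Lite, for instance, lets an NNAPI delegate handle the operations it supports and runs the rest on its CPU kernels.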
The AI Benchmark researchers remained neutral regarding future directions for hardware acceleration of AI algorithms on Android devices. They believe the situation will become clearer in early 2019, when the first smartphones equipped with the powerful new Kirin 980, the MediaTek P80, and premium SoCs from Qualcomm and Samsung's Exynos line reach the market.
The ETH Zurich paper AI Benchmark: Running Deep Neural Networks on Android Smartphones is on arXiv. The AI Benchmark project team will update their research results and real-world tests monthly at http://ai-benchmark.com.
Journalist: Fangyu Cai | Editor: Michael Sarazen