The deployment of machine learning (ML) applications on edge devices, aka TinyML, can greatly benefit AI systems in smart home appliances, virtual assistants, autonomous vehicles and smart surveillance. While reducing latency, bandwidth and power consumption, TinyML can also improve data privacy, and it is crucial for the continued development of interconnected systems in the industrial Internet of Things (IIoT). Despite its attractiveness, the large-scale adoption of TinyML faces a number of hurdles.
In the new paper TinyMLOps: Operational Challenges for Widespread Edge AI Adoption, a research team from Hotg.ai and Ghent University explores the current challenges facing TinyML and techniques designed to reduce the compute, memory, and energy costs of ML models, providing insights for the efficient large-scale deployment of edge AI.
The last decade has witnessed an explosion in the development and application of ML techniques, giving rise to the field of MLOps, a set of best practices for the efficient and reliable deployment of production ML models with regard to automation, monitoring, integration, testing, etc.
While MLOps was developed for centralized, cloud-based applications, TinyMLOps focuses on decentralized, edge-based applications, i.e. AI models deployed on end users’ devices. As different users have different devices with different computational resources, storage availability and network connectivity, TinyMLOps would ideally push a smaller, more efficient model to edge devices with limited resources, and a larger, more accurate model to more powerful devices.
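The idea of matching model variants to device capabilities could be sketched roughly as follows. This is an illustrative example only, not code from the paper; the model names, memory thresholds and `DeviceProfile` fields are all assumptions chosen for demonstration.

```python
# Hypothetical sketch: picking the most accurate model variant that still
# fits a device's resources. All names and numbers here are illustrative.
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    ram_mb: int    # memory available for the model on this device
    has_npu: bool  # whether a dedicated neural accelerator is present

# Candidate variants, ordered best-first: (name, minimum RAM in MB)
MODEL_VARIANTS = [
    ("mobilenet_v2_full", 512),
    ("mobilenet_v2_quantized", 64),
    ("micro_cnn_int8", 1),
]

def select_variant(device: DeviceProfile) -> str:
    """Return the first (most accurate) variant that fits the device."""
    for name, min_ram in MODEL_VARIANTS:
        if device.ram_mb >= min_ram:
            return name
    raise ValueError("no model variant fits this device")
```

A real TinyMLOps platform would of course consider far more than memory (accelerator support, connectivity, battery state), but the dispatch principle is the same: one logical model, many deployable variants.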
The team identifies several research avenues that TinyMLOps developers and practitioners should consider with regard to edge devices. For instance, special hardware support may be required to achieve higher throughput or lower energy consumption; systems should enable users to configure pipelines and provide observable solutions that monitor the distribution of input values and detect data drift; and robust federated learning techniques should be employed to protect data privacy.
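The data-drift monitoring mentioned above could, in its simplest form, compare the statistics of incoming inputs against those recorded at training time. The sketch below is an assumption-laden illustration (the z-score test and threshold are not from the paper) of what such an on-device check might look like:

```python
# Illustrative drift check: flag drift when the mean of live input values
# deviates from the training-time mean by more than z_threshold standard
# errors. A deployed system would likely use richer distribution tests.
import statistics

def detect_drift(train_values, live_values, z_threshold=3.0):
    """Return True if live_values' mean drifts from train_values' mean."""
    mu = statistics.mean(train_values)
    sigma = statistics.pstdev(train_values)
    if sigma == 0:
        # Degenerate training distribution: any change counts as drift.
        return statistics.mean(live_values) != mu
    n = len(live_values)
    z = abs(statistics.mean(live_values) - mu) / (sigma / n ** 0.5)
    return z > z_threshold
```

Running such a lightweight check on-device keeps raw inputs local, which dovetails with the privacy goals that motivate federated learning in the first place.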
The researchers also provide insights on how to address more specific challenges, such as dealing with a fragmented device landscape, protecting the intellectual property of an ML model, and validating the results of a given model.
TinyML remains a nascent research field, with most of the tools and frameworks in their early stages. The team hopes their comprehensive exploration can help boost its progress and encourage and guide the development of new TinyMLOps platforms that will make TinyML accessible to developers and scalable to billions of edge devices.
The paper TinyMLOps: Operational Challenges for Widespread Edge AI Adoption is on arXiv.
Author: Hecate He | Editor: Michael Sarazen