AI Machine Learning & Data Science Research

‘P-Meta’ Learning Approach Boosts Data and Memory Efficiency for On-Device DNN Adaptation in the IoT

In the new paper p-Meta: Towards On-device Deep Model Adaptation, a research team from ETH Zurich, Singapore Management University and Beihang University proposes p-Meta, a novel meta-learning method for data- and memory-efficient on-device adaptation of deep neural networks for IoT applications.

Deep learning-powered applications are playing an increasing role in the massive network of interconnected smart devices known as the Internet of Things (IoT). Current gradient-based meta-learning approaches, however, struggle with the data and memory constraints of these devices, making it challenging to deliver advanced customized services and consistent performance.

A research team from ETH Zurich, Singapore Management University and Beihang University addresses this problem in the new paper p-Meta: Towards On-device Deep Model Adaptation, proposing p-Meta, a novel meta-learning method for data- and memory-efficient on-device adaptation of deep neural networks (DNNs) for IoT applications.

The team summarizes their main contributions as:

  1. We design p-Meta, a new meta-learning method for data- and memory-efficient DNN adaptation to unseen tasks. P-Meta automatically identifies adaptation-critical weights both layer-wise and channel-wise for low-memory adaptation.
  2. Evaluations on few-shot image classification and reinforcement learning show that p-Meta not only improves accuracy but also reduces peak dynamic memory by an average factor of 2.5 over state-of-the-art few-shot adaptation methods. P-Meta can also simultaneously reduce computation by an average factor of 1.7.

The researchers set out to build DNNs for IoT applications that could deliver consistently good performance and enable fast adaptation to unseen environments, users, and tasks. Effective on-device adaptation of DNNs requires both data and memory efficiency, which the proposed p-Meta achieves by enforcing structured partial parameter updates. This approach is inspired by recent advances in understanding gradient-based meta-learning and the realization that not all model weights contribute equally when generalizing to unseen tasks. P-Meta is thus designed to automatically identify adaptation-critical weights and thereby minimize the memory cost of few-shot learning.
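The intuition behind a structured partial update can be illustrated with a small toy sketch (in NumPy, not the authors' code): during inner-loop adaptation, a mask marking adaptation-critical channels gates the gradient, so frozen channels are never updated and their intermediate activations need not be kept in memory. The model, mask, and learning rate below are all illustrative assumptions; in p-Meta the critical weights and per-layer learning rates are learned by the meta-learner rather than fixed by hand.

```python
import numpy as np

def inner_update(W, x, y, layer_lr, channel_mask):
    """One inner-loop gradient step on a toy linear model y_hat = W @ x,
    updating only the rows (output channels) flagged by channel_mask."""
    pred = W @ x
    grad = np.outer(pred - y, x)       # dL/dW for squared-error loss
    # Structured partial update: gradients of frozen channels are zeroed,
    # so those weights stay fixed and their activations need not be stored,
    # which is what lowers peak dynamic memory during adaptation.
    grad *= channel_mask[:, None]
    return W - layer_lr * grad

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
x = rng.normal(size=3)
y = np.zeros(4)
mask = np.array([1.0, 0.0, 1.0, 0.0])  # only channels 0 and 2 are "critical"

W_new = inner_update(W, x, y, layer_lr=0.1, channel_mask=mask)
```

After the step, rows 1 and 3 of `W_new` are bit-identical to `W`, while the masked-in rows have moved toward the few-shot target, mimicking how p-Meta confines adaptation to a small, automatically selected subset of weights.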

In their empirical studies, the team evaluated the proposed p-Meta against baseline methods (MAML, ANIL, MAML++, etc.) on standard few-shot image classification tasks. The results show that p-Meta yields the best performance in most scenarios, while reducing peak memory use by up to 3.4× and computation by up to 2.6× compared to MAML++.

Overall, p-Meta demonstrates promising potential for efficient on-device DNN model adaptation for IoT applications, which the team regards as an important early step toward fully adaptive and autonomous edge intelligence applications.

The paper p-Meta: Towards On-device Deep Model Adaptation is on arXiv.


Author: Hecate He | Editor: Michael Sarazen


We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.
