In the new paper p-Meta: Towards On-device Deep Model Adaptation, a research team from ETH Zurich, Singapore Management University and Beihang University proposes p-Meta, a novel meta-learning method for data- and memory-efficient on-device adaptation of deep neural networks for IoT applications.
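The core idea of adapting on-device with a small memory budget can be sketched as a few-shot inner loop that updates only a meta-learned subset of the network. The sketch below is illustrative, not the authors' implementation: the per-layer mask `update_layer` and all names are assumptions standing in for p-Meta's learned structured sparsity.

```python
import torch

def adapt(model, support_x, support_y, update_layer, inner_lr=0.01, steps=5):
    """Few-shot adaptation that updates only meta-selected layers.

    update_layer: dict mapping parameter names to True/False, standing in
    for a meta-learned mask of which weights are worth updating on-device.
    Frozen layers never store gradients, which is where memory is saved.
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    for name, p in model.named_parameters():
        p.requires_grad_(update_layer.get(name, False))
    opt = torch.optim.SGD(
        [p for p in model.parameters() if p.requires_grad], lr=inner_lr
    )
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(support_x), support_y)
        loss.backward()
        opt.step()
    return model
```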
In the new paper Meta-Learning Sparse Compression Networks, a DeepMind research team proposes steps for scaling implicit neural representations (INRs). The resulting meta-learned sparse compression networks can represent diverse data modalities such as images, manifolds, signed distance functions, 3D shapes and scenes, achieving state-of-the-art results on some of them.
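For readers unfamiliar with INRs, the underlying idea is that a small network maps coordinates to signal values, so the data is "stored" in the weights; compression then amounts to making those weights few and sparse. Below is a minimal coordinate-MLP sketch of that idea only, with illustrative sizes; it does not include the meta-learning or sparsification machinery the paper contributes.

```python
import torch
import torch.nn as nn

class ImageINR(nn.Module):
    """Tiny MLP mapping (x, y) coordinates in [-1, 1] to RGB values."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, coords):  # coords: (N, 2)
        return self.net(coords)

def fit_inr(image, steps=1000, lr=1e-3):
    """Overfit one INR to one image; image: (H, W, 3) tensor in [0, 1]."""
    h, w, _ = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    target = image.reshape(-1, 3)
    model = ImageINR()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(coords) - target) ** 2).mean()
        loss.backward()
        opt.step()
    return model  # the weights are the compressed representation
```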
A research team from Mila (Québec Artificial Intelligence Institute), Université de Montréal, CIFAR and IVADO Labs challenges the assumption that greater task diversity improves model performance in meta-learning, finding instead that repeating the same tasks throughout training can match the performance of models trained with uniform task sampling.
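The two sampling regimes being compared can be made concrete with a short sketch. The helper below is hypothetical and purely illustrative: under "uniform" every episode draws a fresh task from the full set, while under "no-diversity" a small fixed pool of tasks is reused for all episodes.

```python
import random

def make_episode_stream(all_tasks, regime, num_episodes, pool_size=8):
    """Yield one task per meta-training episode under a sampling regime."""
    if regime == "uniform":
        for _ in range(num_episodes):
            yield random.choice(all_tasks)          # fresh task each episode
    elif regime == "no-diversity":
        pool = random.sample(all_tasks, pool_size)  # fixed small task pool
        for _ in range(num_episodes):
            yield random.choice(pool)               # repeat the same tasks
```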
A research team from the University of Washington, Facebook AI Research and the Allen Institute for AI introduces Meta-training for In-Context Learning (MetaICL), a new meta-training framework for few-shot learning in which a language model is meta-trained to learn in context, conditioning on training examples to recover the task and make predictions.
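Concretely, each meta-training instance concatenates k labelled examples from one task with a query input, and the LM is trained to predict the query's output from that context. The sketch below shows this construction; the separator and serialization format are assumptions, not the paper's exact scheme.

```python
def build_instance(train_examples, query_input, query_output, sep="\n"):
    """Build one MetaICL-style instance: k examples as context, then a query.

    The LM's training loss is applied only to query_output, conditioned on
    everything in the prompt.
    """
    context = sep.join(f"{x} {y}" for x, y in train_examples)
    prompt = f"{context}{sep}{query_input} "
    return prompt, query_output

# Example: a 2-shot sentiment instance drawn from one meta-training task.
prompt, target = build_instance(
    [("great movie!", "positive"), ("dull plot.", "negative")],
    "loved every minute.",
    "positive",
)
```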
In the paper Introducing Symmetries to Black Box Meta Reinforcement Learning, a research team from DeepMind and The Swiss AI Lab IDSIA explores the role of symmetries in meta generalization and shows that introducing more symmetries to black-box meta-learners can improve their ability to generalize to unseen action and observation spaces, tasks, and environments.
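One simple way to picture the kind of symmetry involved: if a black-box learner applies the same shared module to every observation dimension and pools the results, it is invariant to permutations of those dimensions and can handle spaces of a different size without new parameters. The architecture below is an assumed illustration of that principle, not the paper's design.

```python
import torch
import torch.nn as nn

class SharedPerDimEncoder(nn.Module):
    """Encoder that treats every observation dimension identically."""
    def __init__(self, hidden=16):
        super().__init__()
        self.per_dim = nn.Sequential(nn.Linear(1, hidden), nn.ReLU())

    def forward(self, obs):                  # obs: (batch, obs_dim)
        h = self.per_dim(obs.unsqueeze(-1))  # same weights for every dim
        return h.mean(dim=1)                 # permutation-invariant pooling
```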