Tag: meta learning

AI Machine Learning & Data Science Research

Meta AI & UPF’s Toolformer: Enabling Language Models to Teach Themselves to Use External Tools

In the new paper Toolformer: Language Models Can Teach Themselves to Use Tools, a team from Meta AI Research and the Universitat Pompeu Fabra proposes Toolformer, a model that teaches itself to choose and use external tools such as search engines, calculators, and translation systems to boost performance on downstream tasks.
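The core idea is that the language model emits inline API calls inside the text it generates, which are then executed and their results spliced back in. The sketch below illustrates this mechanism with a hypothetical calculator tool and made-up markup syntax; it is a minimal illustration, not the paper's implementation.

```python
import re

# Hypothetical sketch of the Toolformer idea: the LM emits inline API-call
# markup such as "[Calculator(400/1400)]"; a post-processor executes the call
# and splices the result back into the text before generation continues.

CALL_PATTERN = re.compile(r"\[Calculator\(([^)]*)\)\]")

def safe_calc(expr: str) -> str:
    # Evaluate a simple arithmetic expression; restrict characters for safety.
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expr):
        raise ValueError(f"unsupported expression: {expr!r}")
    return f"{eval(expr):.2f}"

def execute_tool_calls(text: str) -> str:
    # Replace each inline call with "[Calculator(expr) -> result]".
    return CALL_PATTERN.sub(
        lambda m: f"[Calculator({m.group(1)}) -> {safe_calc(m.group(1))}]", text
    )

output = "Out of 1400 participants, 400 [Calculator(400/1400)] passed the test."
print(execute_tool_calls(output))
```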

AI Machine Learning & Data Science Research

DeepMind’s Meta-Learning Sparse Compression Networks Set New SOTA on Diverse Modality Data Compression

In the new paper Meta-Learning Sparse Compression Networks, a DeepMind research team proposes steps for scaling implicit neural representations (INRs). The resulting meta-learning sparse compression networks can represent diverse data modalities such as images, manifolds, signed distance functions, 3D shapes, and scenes, achieving state-of-the-art results on some of them.
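An implicit neural representation stores a signal as the weights of a small network mapping coordinates to values, so the network itself becomes a compressed encoding of the data. The following is a minimal forward-pass sketch of such a network with randomly initialized weights, using a sinusoidal activation in the SIREN style; all names and sizes here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

# Hypothetical sketch of an implicit neural representation (INR): a tiny MLP
# that maps a normalized (x, y) coordinate to a signal value. Fitting the
# weights to one image would compress that image into the network parameters.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def inr(coords: np.ndarray) -> np.ndarray:
    # coords: (N, 2) array of positions in [0, 1]^2.
    h = np.sin(coords @ W1 + b1)  # sinusoidal activation (SIREN-style)
    return h @ W2 + b2            # (N, 1) predicted signal values

# Query the representation on a 4x4 coordinate grid.
grid = np.stack(
    np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4)), axis=-1
).reshape(-1, 2)
print(inr(grid).shape)
```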

AI Machine Learning & Data Science Natural Language Tech Research

Introducing MetaICL: A Language Model Meta-Training Framework for Few-Shot In-Context Learning

A research team from the University of Washington, Facebook AI Research, and the Allen Institute for AI introduces Meta-training for In-Context Learning (MetaICL), a new meta-training framework for few-shot learning in which an LM is meta-trained to learn in-context — conditioning on training examples to recover the task and make predictions.
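During meta-training, each input is built by concatenating k training examples from a task with a query input, and the model is trained to predict the query's output. The helper below sketches that input construction; the function name and separator are illustrative assumptions, not the released MetaICL code.

```python
# Hypothetical sketch of the MetaICL input format: k (input, output) pairs
# from one task are concatenated with a query input, and the LM is trained
# to predict the query's output, so it learns to learn in-context.

def build_metaicl_input(train_examples, query_input, sep="\n"):
    # train_examples: list of (input, output) pairs sampled from one task.
    parts = [f"{x}{sep}{y}" for x, y in train_examples]
    parts.append(query_input)
    return sep.join(parts)

demo = [("great movie!", "positive"), ("terrible plot.", "negative")]
print(build_metaicl_input(demo, "loved every minute."))
```

At test time, the same format is used with examples from an unseen task, so no parameter updates are needed.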

AI Machine Learning & Data Science Research

DeepMind & IDSIA Introduce Symmetries to Black-Box Meta-RL to Improve Its Generalization Ability

In the paper Introducing Symmetries to Black Box Meta Reinforcement Learning, a research team from DeepMind and The Swiss AI Lab IDSIA explores the role of symmetries in meta generalization and shows that introducing more symmetries to black-box meta-learners can improve their ability to generalize to unseen action and observation spaces, tasks, and environments.
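One symmetry of this kind is invariance to permutations of the observation dimensions: if the learner applies a shared module to each dimension and pools the results, its output is unchanged when the dimensions are reshuffled, which helps it transfer to unseen observation spaces. The snippet below is a toy demonstration of that property, with all weights and names invented for illustration.

```python
import numpy as np

# Toy illustration of a permutation symmetry: apply the same tiny module to
# every observation dimension, then pool by summing. The pooled feature is
# identical under any reordering of the observation dimensions.

rng = np.random.default_rng(1)
w, b = rng.normal(size=3), rng.normal()

def invariant_feature(obs: np.ndarray) -> float:
    # Shared per-dimension transform followed by sum pooling.
    per_dim = np.tanh(obs * w[0] + b)
    return float(per_dim.sum() * w[1] + w[2])

obs = np.array([0.2, -1.3, 0.7, 2.1])
perm = rng.permutation(len(obs))
# The feature is the same for the original and the permuted observation.
print(np.isclose(invariant_feature(obs), invariant_feature(obs[perm])))
```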