Meta-learning is a machine learning approach in which algorithms and models learn how to learn from their own experience. While recent meta-learned models have been shown to capture and reproduce humanlike reasoning, decision-making and language skills, the broader field of meta-learned models of cognition remains relatively underexplored.
In the new paper Meta-Learned Models of Cognition, a team from the Max Planck Institute for Biological Cybernetics (Max-Planck-Gesellschaft, MPG) and DeepMind proposes a research program focused on meta-learned models of cognition. The team cites machine learning papers demonstrating that meta-learning can be used to construct Bayes-optimal learning algorithms, and argues that it can significantly expand the scope of the rational analysis of cognition.
The team summarizes the benefits of meta-learning models over Bayesian inference as follows:
- Meta-learning can produce approximately optimal learning algorithms even if exact Bayesian inference is computationally intractable.
- Meta-learning can produce approximately optimal learning algorithms even if it is not possible to phrase the corresponding inference problem in the first place.
- Meta-learning makes it easy to manipulate a learning algorithm’s complexity and can therefore be used to construct resource-rational models of learning.
- Meta-learning allows us to integrate neuroscientific insights into the rational analysis of cognition by incorporating these insights into model architectures.
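The first point above — that meta-learning can recover approximately Bayes-optimal behaviour without the modeller writing down the inference problem — can be illustrated with a toy sketch. The setup below is our own illustration, not an example from the paper: a simple predictor for coin-flip sequences is "meta-trained" across many tasks (coins with random biases), and the parameters that minimize average prediction loss end up approximating the Bayes-optimal rule for that task distribution (Laplace's rule of succession, since the biases are drawn from a uniform, i.e. Beta(1,1), prior). A grid search stands in for gradient-based meta-training to keep the sketch minimal.

```python
import random
import math

random.seed(0)

def predict(heads, n, a, b):
    # Predict P(next flip = heads) after seeing `heads` heads in `n` flips.
    # The parameters a, b act like prior pseudo-counts to be meta-learned.
    return (heads + a) / (n + a + b)

def meta_loss(a, b, tasks, seq_len):
    # Average negative log-likelihood of next-flip predictions across tasks.
    total = 0.0
    for flips in tasks:
        heads = 0
        for n, x in enumerate(flips):
            p = predict(heads, n, a, b)
            total += -math.log(p if x else 1.0 - p)
            heads += x
    return total / (len(tasks) * seq_len)

# Task distribution: each task is a coin whose bias theta is drawn
# uniformly from [0, 1] -- equivalently, a Beta(1, 1) prior over theta.
SEQ_LEN = 20
tasks = []
for _ in range(2000):
    theta = random.random()
    tasks.append([1 if random.random() < theta else 0 for _ in range(SEQ_LEN)])

# "Meta-learning" as a search over predictor parameters. For this prior,
# the Bayes-optimal predictor is Laplace's rule, i.e. a = b = 1, and the
# search should land at or near it without Bayes' rule ever being invoked.
grid = [0.25, 0.5, 1.0, 2.0, 4.0]
loss, a_star, b_star = min(
    (meta_loss(a, b, tasks, SEQ_LEN), a, b) for a in grid for b in grid
)
print(f"meta-learned pseudo-counts: a={a_star}, b={b_star}, loss={loss:.4f}")
```

The point of the sketch is that the meta-learner only ever sees prediction errors on sampled tasks, yet the parameters it settles on implement (approximately) the Bayesian posterior-predictive rule for the underlying task distribution.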
The paper explores prior psychology and neuroscience studies that have applied meta-learning approaches and demonstrates how neuroscientific insights can be integrated into the rational analysis of cognition. The researchers also observe that the flexibility of neural network frameworks can enable the engineering of humanlike inductive biases into meta-learned models of cognition.
This ambitious study highlights meta-learning models’ connection to Bayesian inference and the rational analysis of cognition. The team hopes their work will resonate with a wide audience and open up opportunities for constructing models of human cognition that remain out of reach with current approaches.
The paper Meta-Learned Models of Cognition is on arXiv.
Author: Hecate He | Editor: Michael Sarazen