
DeepMind’s Bootstrapped Meta-Learning Enables Meta Learners to Teach Themselves

A research team from DeepMind proposes a bootstrapped meta-learning algorithm that overcomes the meta-optimization problem and myopic meta objectives, and enables the meta-learner to teach itself.

Learning how to learn is something most humans do well, leveraging previous experiences to inform the learning process for new tasks. Endowing AI systems with such abilities, however, remains challenging, as it requires the machine learners to learn update rules, which have typically been manually tuned for each task.

The field of meta-learning studies how to enable machine learners to learn how to learn, and is a critical research area for improving the efficiency of AI agents. One common approach is for a learner to learn an update rule by applying it over a number of steps and then evaluating the resulting performance.
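To make this concrete, the JAX sketch below shows perhaps the simplest instance of a meta-learned update rule: gradient descent with a learned log step size. This is purely illustrative, not DeepMind's implementation; the toy regression loss, the meta-parameter `eta`, and the K-step evaluation scheme are all assumptions for exposition.

```python
# Illustrative sketch only (not the paper's code): a meta-learned update
# rule where the learned component is a single log step size `eta`.
import jax
import jax.numpy as jnp

def loss_fn(theta, batch):
    # Toy regression loss; stands in for the learner's objective.
    x, y = batch
    return jnp.mean((x @ theta - y) ** 2)

def update(theta, eta, batch):
    # One application of the update rule: SGD with a learned step size.
    grads = jax.grad(loss_fn)(theta, batch)
    return theta - jnp.exp(eta) * grads

def meta_objective(eta, theta, batches):
    # Apply the rule for K steps, then evaluate the resulting learner.
    for batch in batches:
        theta = update(theta, eta, batch)
    return loss_fn(theta, batches[-1])

# Standard meta-gradient: differentiate K-step performance w.r.t. eta.
meta_grad_fn = jax.grad(meta_objective)
```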

To fully unlock the potential of meta-learning, it is necessary to overcome both the meta-optimization problem and myopic meta objectives. To tackle these issues, a research team from DeepMind has proposed an algorithm designed to enable meta-learners to teach themselves.

For a meta-learner to learn its update rules, it must first evaluate them. This requires applying the rules before evaluation, which can incur prohibitively high computation costs.

Previous studies have assumed that optimizing performance after some number K of applications of the update rule will yield improved performance for the remainder of the learner’s lifetime. If this assumption fails, meta-learners suffer from a short-horizon bias: an update rule that looks best after K steps may, for example, stall later in training. Furthermore, optimizing the learner’s performance after K updates can fail to account for the learning process itself.

Such a meta-optimization process also creates two bottlenecks: 1) Curvature: the meta-objective is constrained to the same type of geometry as the learner; 2) Myopia: the meta-objective is fundamentally limited to evaluating performance within the K-step horizon and ignores future learning dynamics.

The proposed algorithm includes two main features to overcome these issues. First, to mitigate myopia, it leverages bootstrapping to infuse information about future learning dynamics into the objective. Second, to control curvature, the meta-objective is formulated as minimizing the distance to the bootstrapped target. The general idea is thus that a meta-learner can effectively learn to learn from itself: it learns to match, in fewer steps, the updates its own future self would make.

The researchers explain that their proposed algorithm constructs the meta-objective in two steps (both are sketched in code below):

  1. It bootstraps a target from the learner’s new parameters. In the paper, targets are generated by continuing to update the learner’s parameters, either under the meta-learned update rule or under another update rule, for some number of steps.
  2. The learner’s new parameters, which are a function of the meta-learner’s parameters, and the target are projected onto a matching space. A simple example is Euclidean parameter space; to control curvature, a different (pseudo-)metric space can be chosen. A common choice for probabilistic models, for instance, is the Kullback-Leibler (KL) divergence.
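As a rough illustration of the matching step, the hypothetical JAX functions below implement the two choices mentioned above: squared Euclidean distance in parameter space, and a KL divergence between the categorical distributions induced by two sets of logits. The names and shapes are assumptions, not the paper's API.

```python
# Hypothetical matching functions for step 2 of the meta-objective.
import jax
import jax.numpy as jnp

def euclidean_match(new_params, target_params):
    # Squared L2 distance in parameter space.
    return 0.5 * jnp.sum((new_params - target_params) ** 2)

def kl_match(new_logits, target_logits):
    # KL(target || new) between the categorical distributions the two
    # sets of logits induce; a common choice for probabilistic models.
    target_probs = jax.nn.softmax(target_logits)
    return jnp.sum(target_probs * (jax.nn.log_softmax(target_logits)
                                   - jax.nn.log_softmax(new_logits)))
```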

Overall, the meta-learner’s objective is to minimize the distance to the bootstrapped target. To this end, the team applies a novel Bootstrapped Meta-Gradient (BMG) that infuses information about future learning dynamics without increasing the number of update steps to backpropagate through. BMG can thus speed up the optimization process and, as the paper demonstrates, guarantee performance improvements.
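Putting the pieces together, here is a minimal, hypothetical sketch of the BMG meta-objective, reusing `update` and `euclidean_match` from the sketches above. The essential trick is that the bootstrapped target is produced by running L extra update steps and is then held fixed with `stop_gradient`, so backpropagation still spans only the K inner steps.

```python
# Hypothetical BMG meta-objective; reuses `update` and `euclidean_match`
# from the earlier sketches.
import jax

def bmg_meta_objective(eta, theta, batches_K, batches_L):
    # Step 1: K inner steps under the meta-learned rule
    # (differentiable with respect to the meta-parameter eta).
    for batch in batches_K:
        theta = update(theta, eta, batch)
    # Bootstrap the target by continuing for L further steps.
    target = theta
    for batch in batches_L:
        target = update(target, eta, batch)
    # Hold the target fixed: no backprop through the extra L steps.
    target = jax.lax.stop_gradient(target)
    # Step 2: minimize the distance between the learner's new
    # parameters and the bootstrapped target.
    return euclidean_match(theta, target)

# The bootstrapped meta-gradient with respect to eta.
bmg_grad_fn = jax.grad(bmg_meta_objective)
```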

The team conducted extensive experiments comparing BMG with standard meta-gradients. These were performed on a typical reinforcement-learning Markov decision process (MDP) task: learning a policy that maximizes the expected cumulative reward.
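In standard notation (the paper's exact formulation may differ), the objective is to find a policy that maximizes the expected discounted return:

```latex
\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t}\right],
\quad \gamma \in [0, 1)
```

where r_t is the reward received at step t and the discount factor weights near-term rewards more heavily than distant ones.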

In the evaluations, BMG demonstrated substantial performance improvements on the Atari ALE benchmark, achieving a new state of the art. BMG also improved upon model-agnostic meta-learning (MAML) in the few-shot setting, suggesting the study could open new possibilities for efficient meta-learning research.

The paper Bootstrapped Meta-Learning is on arXiv.


Author: Hecate He | Editor: Michael Sarazen, Chain Zhang


