Tag: Gradient Descent

AI Machine Learning & Data Science Research

DeepMind Explores the Connection Between Gradient-Based Meta-Learning and Convex Optimization

In the new paper Optimistic Meta-Gradients, a DeepMind research team explores the connection between gradient-based meta-learning and convex optimization, demonstrating that optimism in meta-learning is achievable via the Bootstrapped Meta-Gradients approach.
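The notion of optimism here comes from online convex optimization: an optimistic update exploits a prediction of the upcoming gradient, most commonly the last observed gradient. The sketch below illustrates that generic idea on a toy quadratic; it is an assumption-laden demonstration, not the paper's Bootstrapped Meta-Gradients algorithm, and the function name and objective are invented for illustration.

```python
import numpy as np

def optimistic_gd(grad, theta, lr=0.1, steps=100):
    """Generic optimistic gradient descent (illustrative, not the
    paper's method): each step adds a 'hint' predicting the next
    gradient, here the standard last-gradient hint g_t - g_{t-1}."""
    g_prev = np.zeros_like(theta)
    for _ in range(steps):
        g = grad(theta)
        # Optimistic step: correct the plain step with the hint g - g_prev.
        theta = theta - lr * (g + (g - g_prev))
        g_prev = g
    return theta

# Toy objective f(theta) = ||theta - 3||^2, whose gradient is 2 * (theta - 3).
theta = optimistic_gd(lambda t: 2.0 * (t - 3.0), np.array([0.0]))
```

When the gradient sequence changes slowly, the hint is accurate and optimism accelerates convergence; when gradients are erratic, the hint contributes little.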

AI Machine Learning & Data Science Research

NeurIPS 2022 | MIT & Meta Enable Gradient Descent Optimizers to Automatically Tune Their Own Hyperparameters

In the NeurIPS 2022 Outstanding Paper Gradient Descent: The Ultimate Optimizer, MIT CSAIL and Meta researchers present a novel technique that enables gradient descent optimizers such as SGD and Adam to tune their own hyperparameters automatically. The method requires no manual differentiation and can be stacked recursively through many levels of hyperparameters.
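The underlying idea — tuning an optimizer's hyperparameters by gradient descent on the training loss itself — can be sketched by hand for plain SGD. Because the current parameters depend on the learning rate through the previous update, the hypergradient of the current loss with respect to the learning rate works out to the negative dot product of the current and previous gradients. The code below is a minimal hand-rolled sketch of that one level of stacking; the paper derives such hypergradients automatically via backpropagation, so this is an illustrative assumption, not its implementation, and all names and constants are invented.

```python
import numpy as np

def sgd_with_hypergrad(grad, theta, lr=0.01, hyper_lr=0.001, steps=200):
    """SGD whose learning rate is itself tuned by gradient descent
    (illustrative sketch). Since theta_t = theta_{t-1} - lr * g_{t-1},
    d loss_t / d lr = -g_t . g_{t-1}, so descending that hypergradient
    means nudging lr by +hyper_lr * (g_t . g_{t-1})."""
    g_prev = np.zeros_like(theta)
    for _ in range(steps):
        g = grad(theta)
        lr = lr + hyper_lr * float(g @ g_prev)  # hypergradient step on lr
        theta = theta - lr * g                  # ordinary SGD step
        g_prev = g
    return theta, lr

# Toy objective f(theta) = 0.5 * ||theta - 1||^2, gradient theta - 1.
theta, lr = sgd_with_hypergrad(lambda t: t - 1.0, np.array([0.0, 0.0]))
```

Intuitively, consecutive gradients pointing the same way (positive dot product) mean the learning rate is too small, so it grows; oscillating gradients shrink it. Stacking repeats the same trick one level up, on `hyper_lr`.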

AI Machine Learning & Data Science Research

Google Brain & Radboud U ‘Dive Into Chaos’ to Show Gradients Are Not All You Need in Dynamical Systems

In the new paper Gradients Are Not All You Need, a Google Brain and Radboud University research team discusses a “particularly sinister” chaos-based failure mode that appears in a variety of differentiable systems, ranging from recurrent neural networks and numerical physics simulations to the training of learned optimizers.
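A standard way to see this failure mode is to differentiate through an unrolled iterated system: the chain rule multiplies one Jacobian per step, so in a chaotic regime the product — and hence the gradient — grows exponentially with the unroll horizon. The sketch below uses the logistic map as a hypothetical stand-in (it is not a system from the paper) and propagates the derivative by forward-mode chain rule.

```python
def unrolled_grad(r, x0=0.5, horizon=50):
    """Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t) and
    carry d x_t / d r alongside via the forward-mode chain rule.
    In the chaotic regime the per-step factors r * (1 - 2x) compound
    exponentially, so the gradient explodes with the horizon."""
    x, dx_dr = x0, 0.0
    for _ in range(horizon):
        dx_dr = x * (1.0 - x) + r * (1.0 - 2.0 * x) * dx_dr  # chain rule
        x = r * x * (1.0 - x)
    return x, dx_dr

# r = 3.9 puts the map in its chaotic regime: compare gradient magnitudes
# for a short versus a long unroll.
_, g_short = unrolled_grad(3.9, horizon=10)
_, g_long = unrolled_grad(3.9, horizon=50)
```

The loss surface itself stays bounded while its gradient blows up, which is exactly why gradient magnitudes become useless training signals for long chaotic unrolls.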