AI Technology

New Hinton Nature Paper Revisits Backpropagation, Offers Insights for Understanding Learning in the Cortex

Although Turing awardee and backpropagation pioneer Geoffrey Hinton’s interests have largely shifted to unsupervised learning, he recently co-authored a paper that takes a look back at backpropagation and explores its potential to contribute to understanding how the human cortex learns.

Hinton and a team of researchers from DeepMind, University College London, and the University of Oxford published the paper last Friday in Nature Reviews Neuroscience. Their main idea is that biological brains could compute effective synaptic updates by using feedback connections to induce neuron activities whose locally computed differences encode backpropagation-like error signals.

Backpropagation of errors, or backprop, is a widely used algorithm in training artificial neural networks using gradient descent for supervised learning. The basics of continuous backpropagation were proposed in the 1960s, and in 1986 a Nature paper co-authored by Hinton showed experimentally that backprop can generate useful internal representations for neural networks.

[Figure: A spectrum of learning algorithms]

The introduction of backpropagation also generated excitement in the neuroscience community, where it was viewed as a possible source of insight on understanding the learning process in the cortex. How the cortex modifies synapses to improve the performance of multistage networks remains one of the biggest mysteries in neuroscience.

Although we know that human brains learn by modifying the synaptic connections between neurons, synapses in the cortex are embedded within multi-layered networks, making it difficult to determine the effect of an individual synaptic modification on the behaviour of the system. In artificial neural networks, backprop tries to solve this problem by computing how slight changes in each synapse’s strength change the network’s error rate using the chain rule of calculus.
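The chain-rule bookkeeping described above can be sketched in a few lines of NumPy. The tiny two-layer network, its weight shapes, and the learning rate below are illustrative choices of ours, not anything from the paper; the point is only how the error signal propagates backwards, layer by layer, to yield a gradient for every synapse.

```python
import numpy as np

# Illustrative setup: a 3-input, 4-hidden, 1-output network (our own toy example).
rng = np.random.default_rng(0)
x = rng.normal(size=(3,))           # input activity
y = np.array([1.0])                 # target output
W1 = rng.normal(size=(4, 3)) * 0.5  # input-to-hidden synapses
W2 = rng.normal(size=(1, 4)) * 0.5  # hidden-to-output synapses

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass: each layer's activity depends on the layer before it.
h = sigmoid(W1 @ x)
out = sigmoid(W2 @ h)
loss = 0.5 * np.sum((out - y) ** 2)

# Backward pass: the chain rule carries the error signal back through
# each layer, giving the gradient of the loss w.r.t. every weight.
d_out = (out - y) * out * (1 - out)   # error at the output's pre-activation
grad_W2 = np.outer(d_out, h)          # dL/dW2
d_h = (W2.T @ d_out) * h * (1 - h)    # error carried back through W2
grad_W1 = np.outer(d_h, x)            # dL/dW1

# One gradient-descent step nudges every synapse against its gradient.
lr = 0.1
W1 -= lr * grad_W1
W2 -= lr * grad_W2
```

After the update, re-running the forward pass gives a slightly lower loss, which is exactly the "slight changes in each synapse's strength" the passage describes.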

The relevance of backpropagation to the cortex, however, had long been in doubt. The method was viewed as biologically problematic: it was classically described in the supervised learning setting, whereas the brain is thought to learn mainly in an unsupervised fashion and appears to use its feedback connections for other purposes. Moreover, for decades after it was first proposed, backpropagation failed to produce truly impressive performance in artificial systems.

Backprop made its comeback in the 2010s, contributing to the rapid progress in unsupervised learning problems such as image and speech generation, language modelling, and other prediction tasks. Combining backprop with reinforcement learning also enabled significant advances in solving control problems such as mastering Atari games and beating top human professionals in games like Go and poker.

The successes of artificial neural networks over the past decade along with developments in neuroscience have reinvigorated interest in whether backpropagation can offer insights for understanding learning in the cortex. The new paper proposes that the brain has the capacity to implement the core principles underlying backprop, despite the apparent differences between brains and artificial neural nets.

The researchers introduced neural gradient representation by activity differences (NGRAD), which they define as learning mechanisms that use differences in activity states to drive synaptic changes.

To function in neural circuits, NGRADs need to be able to coordinate interactions between feedforward and feedback pathways, compute differences between patterns of neural activities, and use these differences to make appropriate synaptic updates. Although it is not yet clear how biological circuits could support these operations, the researchers say that recent empirical studies present an expanding set of potential solutions to these implementation requirements.
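One hypothetical way such a mechanism might look can be sketched in NumPy: feedback from the target nudges the hidden layer toward a "clamped" activity state, and the locally computed difference between the clamped and feedforward ("free") activities drives the synaptic update. The network, the feedback weights `B`, and the nudging strength `beta` below are our own illustrative assumptions, not the paper's concrete proposal; the sketch only shows an activity difference, rather than an explicitly transmitted error signal, acting as the learning signal.

```python
import numpy as np

# Illustrative setup (our own assumptions, not from the paper).
rng = np.random.default_rng(1)
x = rng.normal(size=(3,))
target = np.array([0.0, 1.0])
W = rng.normal(size=(4, 3)) * 0.5   # feedforward synapses into the hidden layer
V = rng.normal(size=(2, 4)) * 0.5   # feedforward synapses to the output
B = rng.normal(size=(4, 2)) * 0.5   # feedback synapses (hypothetical; need not mirror V)

def f(z):
    return np.tanh(z)

# The feedforward pass sets the "free" activity state.
h_free = f(W @ x)
out = f(V @ h_free)

# Feedback activity nudges the hidden layer toward a "clamped" state.
beta = 0.1  # small nudging strength (assumed)
h_clamped = f(W @ x + beta * (B @ (target - out)))

# NGRAD-style rule: the difference between the two activity states,
# computed locally at the hidden layer, drives the synaptic change.
lr = 0.5
delta_W = lr * np.outer((h_clamped - h_free) / beta, x)
W += delta_W
```

The key property is locality: each synapse's update depends only on the pre-synaptic input and the difference between two activity states of its post-synaptic neuron, which is the kind of quantity a biological circuit could plausibly compute.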

[Figure: Empirical findings suggest new ideas for how backprop-like learning might be approximated by the brain.]

The NGRAD framework demonstrates that it is possible to embrace the core principles of backpropagation while sidestepping many of its problematic implementation requirements. And although the researchers focused on the cortex because many of its architectural features resemble those of deep networks, they believe NGRADs may be relevant to any brain circuit that incorporates both feedforward and feedback connectivity.

Many pieces are still missing that would firmly connect backprop with learning in the brain. Nonetheless, the situation now is very much reversed from decades ago, when neuroscience was thought to have little to learn from backprop. Now, the researchers believe, learning by following the gradient of a performance measure can work very well in deep neural networks: “It therefore seems likely that a slow evolution of the thousands of genes that control the brain would favour getting as close as possible to computing the gradients that are needed for efficient learning of the trillions of synapses it contains.”

The paper Backpropagation and the Brain is available in Nature Reviews Neuroscience. The first author is Timothy P. Lillicrap, and the research team also includes Adam Santoro, Luke Marris, and Colin J. Akerman.


Journalist: Yuan Yuan | Editor: Michael Sarazen
