
Geoffrey Hinton’s Forward-Forward Algorithm Charts a New Path for Neural Networks

Turing Award winner and deep learning pioneer Geoffrey Hinton, one of the original proponents of backpropagation, has argued in recent years that backpropagation does not explain how the brain works. In his NeurIPS 2022 keynote speech, Hinton proposed a new approach to neural network learning: the Forward-Forward algorithm.

Ten years ago, Geoffrey Hinton and his University of Toronto students published the paper ImageNet Classification with Deep Convolutional Neural Networks, presenting the first convolutional neural network to significantly surpass state-of-the-art results on the ImageNet database. The paper was recently honoured, in its first year of eligibility, with the NeurIPS 2022 Test of Time Award. In his NeurIPS keynote speech last week, Hinton offered his thoughts on the future of machine learning — focusing on what he has dubbed the “Forward-Forward” (FF) algorithm.

Deep neural networks with huge parameter counts, trained by stochastic gradient descent on massive datasets, have achieved stunning successes over the past decade. The gradients for such training are typically computed via backpropagation, a technique Hinton helped pioneer. But there is increasing interest in whether the biological brain actually performs backpropagation or, as Hinton asks, whether it has some other way of getting the gradients needed to adjust the weights on its connections. Hinton offers the FF algorithm as his answer to this question: an alternative to backpropagation for neural network learning.
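
For contrast with what follows, here is a minimal sketch of backpropagation on a two-layer classifier; the shapes, learning rate, and loss are illustrative assumptions, not taken from any particular system. The point to notice is the backward pass, in which error derivatives must flow from the output back through every layer, the step Hinton questions as a model of the brain.

```python
import numpy as np

# Minimal two-layer classifier trained with backpropagation, for contrast
# with the Forward-Forward sketch further down. All shapes and the
# learning rate are illustrative assumptions.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 784 ** -0.5, (784, 128))
W2 = rng.normal(0.0, 128 ** -0.5, (128, 10))

def train_step(x, y_onehot, lr=0.1):
    """One SGD step on a batch: a forward pass, then a backward pass."""
    global W1, W2
    # Forward pass, keeping intermediate activations for the backward pass.
    h = np.maximum(0.0, x @ W1)                      # ReLU hidden layer
    logits = h @ W2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                # softmax probabilities
    # Backward pass: the error signal travels from the output back through
    # every layer. This is the step Hinton argues the brain may not perform.
    d_logits = (p - y_onehot) / len(x)               # softmax cross-entropy grad
    dW2 = h.T @ d_logits
    d_h = (d_logits @ W2.T) * (h > 0.0)              # chain rule through ReLU
    dW1 = x.T @ d_h
    W1 -= lr * dW1
    W2 -= lr * dW2
```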

The FF algorithm is inspired by Boltzmann machines (Hinton and Sejnowski, 1986) and Noise Contrastive Estimation (Gutmann and Hyvärinen, 2010). It replaces the forward and backward passes of backpropagation with two forward passes: a positive pass that operates on real data and adjusts weights "to improve the goodness in every hidden layer," and a negative pass that operates on externally supplied or model-generated "negative data" and adjusts weights to reduce the goodness. Each layer thus has its own local objective, high goodness for positive data and low goodness for negative data, so no error signal ever needs to be propagated backwards through the network.
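
To make this concrete, here is a minimal sketch of a single FF layer, written from the description in Hinton's paper rather than from any released code; the class name, learning rate, and threshold value are assumptions. Goodness is taken to be the sum of squared activations, as in the paper, and each layer updates its own weights from the two forward passes alone.

```python
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    """One layer trained with the Forward-Forward objective (sketch)."""

    def __init__(self, n_in, n_out, lr=0.03, threshold=2.0):
        self.W = rng.normal(0.0, n_in ** -0.5, (n_in, n_out))
        self.lr = lr                 # learning rate (assumed value)
        self.threshold = threshold   # goodness threshold (assumed value)

    @staticmethod
    def _normalize(x):
        # Normalize input length so only its direction carries information;
        # the paper does this so goodness cannot be trivially passed along.
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    def forward(self, x):
        return np.maximum(0.0, self._normalize(x) @ self.W)  # ReLU activity

    def train_step(self, x_pos, x_neg):
        # Two forward passes: sign = +1 raises goodness on positive (real)
        # data, sign = -1 lowers it on negative data. Updates are local to
        # this layer; no error signal is sent back to earlier layers.
        for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
            xn = self._normalize(x)
            h = np.maximum(0.0, xn @ self.W)
            goodness = (h ** 2).sum(axis=1)    # sum of squared activities
            # Gradient ascent on log sigmoid(sign * (goodness - threshold)).
            p = 1.0 / (1.0 + np.exp(-sign * (goodness - self.threshold)))
            coeff = 2.0 * sign * (1.0 - p)     # d log-likelihood / d goodness
            self.W += self.lr * (xn.T @ (h * coeff[:, None]))

# Illustrative usage with random stand-ins. In the paper, negative data can
# be, e.g., hybrid images; the shuffled pixels here are a crude placeholder.
layer = FFLayer(n_in=784, n_out=500)
x_pos = rng.random((64, 784))
x_neg = rng.permuted(x_pos, axis=1)
for _ in range(100):
    layer.train_step(x_pos, x_neg)
```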

Hinton posits that the FF algorithm is a more plausible model of the cortical learning process in the brain and is better suited to very low-power hardware. He also advocates abandoning computer science's strict separation of software from hardware, suggesting that future computers be designed and built as "mortal": their learned weights would be inseparable from the particular physical hardware they grew on, sacrificing the ability to copy software exactly in exchange for large savings in energy and computational resources. The FF algorithm, he argues, is the learning method best equipped for such hardware.

In an empirical study, the FF algorithm achieved a 1.4 percent test error rate on the MNIST dataset without using complicated regularizers, showing that it works about as well as backpropagation on this task. The FF approach also delivered results competitive with backpropagation on the CIFAR-10 dataset.

Hinton suggests the proposed FF algorithm combined with a mortal computing model could one day enable running trillion-parameter neural networks on only a few watts of power. Hinton turned 75 this month, but this ambitious new research shows that the Turing Award winner is not resting on his laurels.

The paper The Forward-Forward Algorithm: Some Preliminary Investigations is available on the University of Toronto website.


Author: Hecate He | Editor: Michael Sarazen, Chain Zhang


