One might think that news of the 2019 Honda Prize being awarded to Dr. Geoffrey Hinton “for his pioneering research in the field of deep learning in artificial intelligence (AI)” would prompt the machine learning community to toast the man they call the “Godfather of Deep Learning.” Instead, the gloves came off and what ensued was an unexpected Internet dust-up.
Jürgen Schmidhuber started it. In a blog post, the Scientific Director of The Swiss AI Lab IDSIA called out the Honda Prize for crediting Hinton with inventing backpropagation, among other things. Schmidhuber argued that “Hinton has made significant contributions to artificial neural networks (NNs) and deep learning, but Honda credits him for fundamental inventions of others whom he did not cite.”
Schmidhuber identified what he said were “six false and/or misleading attributions of credit to Dr. Hinton” in the press release. “I’ll point out,” he wrote, “that Hinton’s most visible publications failed to mention essential relevant prior work – this may explain some of Honda’s misattributions.”
The 6,300-word document, Critique of Honda Prize for Dr. Hinton, was published on Tuesday on The Swiss AI Lab IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) website. The opening line reads: “We must stop crediting the wrong people for inventions made by others.”
Today, Hinton, University Professor Emeritus at the University of Toronto, responded on Reddit, “I have never claimed that I invented backpropagation. David Rumelhart invented it independently long after people in other fields had invented it. It is true that when we first published we did not know the history so there were previous inventors that we failed to cite. What I have claimed is that I was the person to clearly demonstrate that backpropagation could learn interesting internal representations and that this is what made it popular.”
There is a history between the two respected researchers. Back in 2015, shortly after Hinton, Yann LeCun and Yoshua Bengio published Deep Learning in Nature, Schmidhuber wrote on his blog, “Machine learning is the science of credit assignment. The machine learning community itself profits from proper credit assignment to its members. The inventor of an important method should get credit for inventing it. She may not always be the one who popularizes it. Then the popularizer should get credit for popularizing it (but not for inventing it)… If you “re-invent” something that was already known, and only later become aware of this, you must at least make it clear later.”
Hinton wrote that the Reddit post would be his last word on the topic, and wrapped it up with the admission that “I’ve seen things in the press that say that I invented backpropagation, and that is completely wrong. It’s one of these rare cases where an academic feels he has got too much credit for something!”
The feud has been picked up by the machine learning community and the conversations continue on social media. Google DeepMind Research Scientist Oriol Vinyals tweeted a hopeful message: “It would be great to credit ideas instead of people. Science should be unbiased & anonymous.”
The basics of backpropagation were derived by multiple researchers in the early 1960s, in the context of control theory and the chain rule, and were implemented to run on computers as early as 1970. The term and its general use in neural networks were proposed in the 1986 Nature paper Learning Representations by Back-propagating Errors, co-authored by David Rumelhart, Hinton, and Ronald Williams.
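At its core, backpropagation is just the chain rule applied repeatedly, propagating error derivatives backward from the loss to each weight. The following is a minimal illustrative sketch (not code from any of the papers discussed; the tiny network, weights, and learning rate are all made up for the example), showing the gradient of a squared-error loss through a single tanh unit:

```python
import math

# Toy network: x -> h = tanh(w1*x) -> y = w2*h, with loss = (y - t)^2.
# All names (w1, w2, x, t) are illustrative assumptions, not from the 1986 paper.

def forward(w1, w2, x, t):
    h = math.tanh(w1 * x)
    y = w2 * h
    loss = (y - t) ** 2
    return h, y, loss

def backward(w1, w2, x, t):
    """Apply the chain rule from the loss back to each weight."""
    h, y, _ = forward(w1, w2, x, t)
    dloss_dy = 2.0 * (y - t)        # d(loss)/dy
    dy_dw2 = h                      # y = w2 * h
    dy_dh = w2
    dh_dw1 = (1.0 - h * h) * x      # tanh'(z) = 1 - tanh(z)^2, z = w1*x
    grad_w2 = dloss_dy * dy_dw2
    grad_w1 = dloss_dy * dy_dh * dh_dw1
    return grad_w1, grad_w2

# A few gradient-descent steps using those derivatives shrink the loss.
w1, w2, x, t = 0.5, -0.3, 1.0, 0.8
for _ in range(100):
    g1, g2 = backward(w1, w2, x, t)
    w1 -= 0.1 * g1
    w2 -= 0.1 * g2
```

In a multi-layer network the same backward pass is repeated layer by layer, which is what made the 1986 demonstration of learned internal representations possible.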
Journalist: Fangyu Cai & Yuan Yuan | Editor: Michael Sarazen