
Gary Marcus’ Deep Learning Critique Triggers Backlash

It took a mere 72 hours for deep learning researchers to ignite the first AI Twitter debate of 2018.

On January 2, NYU Professor Gary Marcus, founder of the Uber-acquired machine learning startup Geometric Intelligence, published the paper Deep Learning: A Critical Appraisal on arXiv. The paper catalogues the problems keeping current research from achieving artificial general intelligence.

Marcus’ central argument was that present deep learning systems fail to generalize beyond the specific datasets they have seen. He listed ten challenges facing deep learning research, such as its hunger for data, lack of transparency, inability to extrapolate, and difficulty of engineering.

Marcus said his greatest fear is that AI research will get trapped in a “local minimum, focusing too much on the detailed exploration of a particular class of accessible but limited models,” and forget its mission to march towards artificial general intelligence. He then raised possibilities for a future beyond deep learning: focusing more on unsupervised learning; revisiting symbol manipulation (GOFAI, or Good Old-Fashioned AI); deriving insights from cognitive and developmental psychology; and targeting common sense knowledge, scientific reasoning, game playing, and more.

A day later, former AAAI Co-chair and NIPS Chair Thomas G. Dietterich countered Gary Marcus’ article with no fewer than ten tweets, calling it a “disappointing article… DL learns representations as well as mappings. Deep machine translation reads the source sentence, represents it in memory, then generates the output sentence. It works better than anything GOFAI ever produced.”

Dietterich added that “DL is essentially a new style of programming — differentiable programming — and the field is trying to work out the reusable constructs in this style. We have some: convolution, pooling, LSTM, GAN, VAE, memory units, routing units, etc.”
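To make the phrase “differentiable programming” concrete, here is a minimal toy sketch (not from the article, and far simpler than the constructs Dietterich lists): a tiny reverse-mode automatic differentiation engine in plain Python. The point is that an ordinary-looking program built from such composable pieces can be differentiated end to end, which is the property deep learning frameworks exploit for training.

```python
import math

class Value:
    """A scalar that records the operations applied to it, so gradients
    can flow backward through the resulting computation graph."""
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None  # no-op for leaf nodes

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = backward
        return out

    def tanh(self):
        t = math.tanh(self.data)
        out = Value(t, (self,))
        def backward():
            self.grad += (1 - t * t) * out.grad
        out._backward = backward
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

# An ordinary-looking program: y = tanh(w * x + b).
x, w, b = Value(2.0), Value(-0.5), Value(0.25)
y = (w * x + b).tanh()
y.backward()  # w.grad now holds dy/dw, obtained automatically
print(f"y = {y.data:.4f}, dy/dw = {w.grad:.4f}")
```

Real frameworks apply the same idea to tensors rather than scalars, and the reusable constructs Dietterich names (convolution, LSTM, attention, and so on) are differentiable building blocks composed exactly like the `tanh` and `*` operations above.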

Long-time deep learning advocate and Facebook Director of AI Research Yann LeCun backed Dietterich’s counter-arguments: “Tom is exactly right.” In a response to MIT Tech Review Editor Jason Pontin and Gary Marcus, LeCun testily suggested that the latter might have mixed up “deep learning” and “supervised learning,” and said the number of valuable recommendations in Marcus’ paper totalled “exactly zero.”

Marcus and LeCun have a history: they squared off in a New York University debate last October. Marcus advocates integrating deep learning with human cognitive science, while LeCun is not thrilled by that prospect. You can watch the NYU debate here: https://www.youtube.com/watch?v=aCCotxqxFsk

Some Reddit users argued that Marcus had ignored technical details and recent advances such as GANs and zero-shot and few-shot learning methods. Redditor Gwern commented, “If anything, I came out more convinced DL is the future, if that is the best the critics can do…”

So is deep learning hitting a wall, as Marcus’ paper argues? Other researchers have raised similar questions: last year, Turing Award winner and University of California, Los Angeles Professor Judea Pearl argued in his paper Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution that human-level AI cannot emerge from model-blind learning machines that ignore causal relationships. He brought the discussion to NIPS in December.

At an AI conference in Montreal last October, the deep learning trio of Geoffrey Hinton, Yann LeCun, and Yoshua Bengio agreed that deep learning research is no longer on the fast track. So have we hit a wall? Opinions diverge, but as we roll into 2018 one thing appears certain: this won’t be the last AI Twitter debate of the year.


Journalist: Meghan Han | Editor: Michael Sarazen
