
Deep Learning Paper Sparks Online Feud!

A tweet from Google DeepMind's Nenad Tomasev concerning a paper from the prestigious Montreal Institute for Learning Algorithms prompted well-known natural language processing expert Yoav Goldberg to tweet back: “...I really dislike this work.”

Feature image credit: Jannoon028 – Freepik.com

Researchers Yoav Goldberg and Yann LeCun face off on Natural Language Processing

Social media is humanity’s new intellectual battlefield. Sports fans, social justice warriors, and even the President of the United States tweet, take to discussion boards or make memes to mock, preach, thrust and parry against their ideological opponents. The deep learning community, meanwhile — a staid and steady group focused on research, code and algorithms — might have been expected to be above such online bickering. That changed last weekend.

It all started with a tweet from Google DeepMind’s Nenad Tomasev, concerning a paper from the prestigious Montreal Institute for Learning Algorithms (MILA). Bar Ilan University Senior Lecturer and well-known natural language processing (NLP) expert Yoav Goldberg tweeted back: “…I really dislike this work.”

Goldberg then fired off a scathing 3,500-word missive on the blogging platform Medium, denouncing the paper for what he deemed “over-selling” and “flag-planting.” His argument was quickly countered in a Facebook post by Yann LeCun, the father of Convolutional Neural Networks (CNN), and the battle was on.


The paper at the centre of the storm, Adversarial Generation of Natural Language, proposed a way to generate natural language with Generative Adversarial Networks (GANs), a state-of-the-art neural network framework for unsupervised learning. Goldberg was wholly unconvinced.

Goldberg’s Medium post abruptly opened with “For f*cks sake, DL people, leave language alone and stop saying you solve it” and called on readers to not be swayed by “large claims pretending to solve [natural language processing], while actually doing tiny, insignificant, toy problems.” He cautioned the deep learning community to “position your work in the context of other works” and to “acknowledge the limitations of your work.”

The next day, Facebook Director of AI Research Yann LeCun responded to Goldberg’s post, saying his reaction did not help cross-field collaboration: “It takes time for communities to develop a common language and adopt the best of each other’s methodologies.” LeCun also defended arXiv for its efficiency in the development of scientific research. “The process that posting on arXiv gives us is simply much more efficient than the traditional model of publication.”


The argument continued through a third day, as Goldberg responded to LeCun with a softened tone, saying he appreciated Yann’s response and the interest and debate around his post. But he stuck to his position that the deep learning community had only “a very superficial understanding” of natural language processing, and that the paper was making “broad and unsubstantiated claims.”

“Sloppy papers with broad titles such as Adversarial Generation of Natural Language are harmful. It is exactly the difference between the patent system (which is overall a reasonable idea) and patent trolling (which is a harmful abuse),” wrote Goldberg.

Just as the Goldberg-LeCun tilt seemed to be fizzling out, it was rekindled on social media, where it grew into a wide-ranging debate, with thousands of comments from the deep learning community flooding Twitter, Facebook and Medium regarding the paper, the response, and the role of its host, the non-peer-reviewed scientific paper repository arXiv.

Quora VP of Engineering Xavier Amatriain tweeted “I cannot support [Goldberg’s] rebuttal that is based on the premise ‘My problem is hard, you should show some respect.’” Meanwhile, François Chollet, creator of the Keras deep learning library and a Google deep learning researcher, tweeted “People who post half-baked papers with misleading claims on arXiv to by-pass peer review & be ‘first’ do so because of poor incentives.”

The debate spread to Zhihu (China’s Quora), where a discussion titled “What do you think about Goldberg’s criticism of the MILA paper?” has already drawn over 70,000 views.

Why was Goldberg so miffed about the paper?

Goldberg argued the MILA paper that ignited the “explosive” debate had problems with attitude, method, and evaluation.

Goldberg took issue with the title, “Adversarial Generation of Natural Language.” This suggested the researchers had successfully generated language with GAN, which Goldberg flat-out rejected: “Call it what it really is: ‘A Slightly Better Trick for Adversarial Training of Short Discrete Sequences with Small Vocabularies That Somewhat Works’”.

Goldberg also questioned the use of GANs, which have yielded impressive results for modeling images but far less so for natural language processing. A GAN puts two networks in an adversarial training loop: a generator network produces synthetic outputs intended to look realistic, while a discriminator network tries to distinguish those outputs from real examples. Eventually, the generator learns to produce results realistic enough to “fool” the discriminator.
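In code, that adversarial loop looks roughly like the minimal sketch below, assuming PyTorch; the tiny fully connected generator and discriminator are illustrative stand-ins, not the architecture from the MILA paper.

    # Minimal GAN training step (illustrative sketch, assuming PyTorch).
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 256))  # noise -> fake sample
    D = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 1))   # sample -> realness logit

    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

    def train_step(real):                   # real: (batch, 256) tensor of genuine examples
        batch = real.size(0)
        noise = torch.randn(batch, 64)

        # Discriminator step: label real examples 1, generated examples 0.
        fake = G(noise).detach()            # detach so only D is updated here
        loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: push D toward labeling generated examples as real.
        loss_g = bce(D(G(noise)), torch.ones(batch, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        return loss_d.item(), loss_g.item()

Run over many batches, these two alternating steps drive the generator's outputs toward the real data distribution, which is exactly the dynamic the paper tried to transplant from images to text.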

But Goldberg argued that the generator network in the paper only produced near-one-hot vectors, which are extremely sharp distributions over the vocabulary. He was unsure whether this method should be encouraged in the study of natural language.
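To make the objection concrete, here is a toy illustration (the numbers are made up) of what a “near-one-hot” softmax output looks like over a tiny vocabulary, and how close it must stay to a genuinely discrete one-hot token:

    # Illustrative only: a near-one-hot softmax over a 4-word vocabulary.
    import torch
    import torch.nn.functional as F

    logits = torch.tensor([8.0, 1.0, 0.5, 0.2])  # made-up generator scores
    soft = F.softmax(logits, dim=0)              # ~[0.998, 0.001, 0.001, 0.000]: very sharp

    # A real token is exactly one-hot; the continuous-relaxation trick feeds
    # the softmax to the discriminator instead of sampling a word, which only
    # resembles discrete text when the distribution is this peaked.
    one_hot = F.one_hot(soft.argmax(), num_classes=4).float()  # [1., 0., 0., 0.]
    print(soft, one_hot)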

Goldberg also argued that the results were not appropriately evaluated. For example, the MILA researchers constructed a data-generating distribution from a context-free grammar (CFG) or a probabilistic context-free grammar (PCFG) as an evaluation strategy, but in Goldberg’s view, neither of these grammars fits natural language. “They include such impressive natural language sentences as ‘what everything they take everything away from.’ These are not even grammatical!” mocked Goldberg. “These guys should really have consulted with someone who worked with natural language before.”
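For context, the idea behind such an evaluation is to sample training data from a known grammar and then check whether generated strings parse under it. The toy sketch below, using NLTK with a made-up grammar (not the one from the paper), shows the mechanic:

    # Illustrative sketch: sample sentences from a toy CFG, then test
    # whether a string parses under that grammar. Grammar is invented.
    import random
    import nltk
    from nltk import CFG
    from nltk.parse import ChartParser

    grammar = CFG.fromstring("""
    S -> NP VP
    NP -> 'the' N
    VP -> V NP
    N -> 'cat' | 'dog'
    V -> 'sees' | 'chases'
    """)
    parser = ChartParser(grammar)

    def sample(symbol=grammar.start()):
        """Randomly expand a nonterminal into a list of words."""
        production = random.choice(grammar.productions(lhs=symbol))
        words = []
        for item in production.rhs():
            words += sample(item) if isinstance(item, nltk.Nonterminal) else [item]
        return words

    sentence = sample()                       # e.g. ['the', 'dog', 'chases', 'the', 'cat']
    in_grammar = any(parser.parse(sentence))  # True iff the string parses under the CFG
    print(' '.join(sentence), in_grammar)

Goldberg’s complaint was not with this mechanic itself, but with treating membership in such a toy grammar as evidence of generating “natural language.”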

The paper’s authors have not yet officially responded to Goldberg’s blog.

When the dust settled

The focus of the discussion eventually evolved from the paper itself to the deep learning community and arXiv.

In contrast to the traditionally lengthy peer review process, arXiv’s “bazaar-like model of collaboration” accelerates the release of scientific papers, making it a rich and valuable repository. The arXiv submission rate for May 2017 was 11,194 papers, five times the figure for May 1997. The trade-off, naturally, is quality: not all papers on arXiv meet scientific standards.

Goldberg was concerned that a paper from a lab like MILA could carry such methodological flaws: “Why do I care that some paper got on arXiv? Because many people take these papers seriously, especially when they come from a reputable lab like MILA. And now every work on either natural language generation or adversarial learning for text will have to cite ‘Rajeswar et al 2017’. And they will accumulate citations. And reputation. Despite being a really, really poor work when it comes to language generation.” This is a growing problem on arXiv, as some users, including deep learning researchers, abuse the platform by overstating their results or click-baiting with catchy titles.

Goldberg’s words ruffled feathers in the deep learning community, which was probably why LeCun fought back. LeCun implied Goldberg’s blog was a typical defensive piece that pitted natural language processing researchers against deep learning researchers. Goldberg is no stranger to deep learning; in fact, he has widely deployed deep learning methods in his own research, including his 2015 primer A Primer on Neural Network Models for Natural Language Processing.

Meanwhile, Google VP and Engineering Fellow Fernando Pereira broke a three-year blogging silence to weigh in on the debate with a short, comedic piece on the “computational linguistic farce.” Pereira sardonically cast traditional NLP researchers as the old guard, and DL researchers as disruptors invading their territory.

While it was somewhat surprising to see such respected researchers sparring in the public arena, presenting frank and contrary opinions on social media is, quite simply, one way that debates take place these days. Why should scientists be any different?


Author: Tony Peng | Editor: Michael Sarazen | Producer: Chain Zhang
