The NeurIPS 2020 paper reviews were sent out last Friday, starting the author response phase. While many researchers are pondering how to draft their rebuttals, others are decrying what they see as problematic or even “terrible” reviews. It’s that time of year, and controversies are again swirling around the prestigious machine learning conference’s review process.
Anima Anandkumar, Director of Machine Learning Research at NVIDIA and a professor at the California Institute of Technology, tweeted that she's "seeing terrible @NeurIPSConf reviews for the nth time." Anandkumar suggested banning reviewers from submitting papers "if they can't write a review in good faith."
Terrible how? Google Brain Senior Staff Software Engineer Rohan Anil tweeted, “Reviewer #4 – at least complete your sentence. I don’t have access to GPT3 to complete the ‘Th’ you left in the one sentence review, do you mean thunder, or Thursday.”
Graham Neubig had a different take, tweeting "Sure, we did get a couple 1-sentence reviews, major misunderstandings, 'not novel' (no citations), but overall things were OK." The CMU associate professor added, "I think there's always space to make the review process better (and we're working to do so for *ACL), but I'd like to thank all the people who put the time into doing a good job of it!"
In a first, the NeurIPS organizing committee this year required all paper authors to also agree to conduct paper reviews if asked. “This requirement is useful for increasing our reviewer pool size and to fairly distribute the reviewing load more evenly among community members who submit papers,” reads a NeurIPS blog post. “We believe that all authors should be willing to contribute their time and expertise as reviewers.” The motivation for the change is further explained in a YouTube video.
This year saw a 38 percent increase in paper submissions to NeurIPS. The scheme of recruiting reviewers from among submitting authors has resulted in 2,400 of the 7,800 reviewers also being authors of submitted papers. Each paper is assigned to three reviewers and one area chair (AC), with submitter-reviewers typically given low quotas of two or three papers.
According to the AI Index 2019 Report, the volume of AI papers published in peer-reviewed journals showed a year-on-year increase of more than 300 percent. Major AI conferences like NeurIPS, AAAI, and CVPR have been breaking paper submission records every year. This has led to various complaints regarding long delays, inconsistent standards, and unqualified reviewers in the peer review process.
While the reviewers judge papers, who judges the reviewers? Google AI Research Scientist Hossein Mobahi, who was also an AC for NeurIPS 2019, tweeted, "Currently in ICML and NeurIPS, ACs already score reviewers. AFAIK this score is only used for best reviewers recognition, to keep reviewers motivated. It can similarly serve as a scoring method to remove lingering bad reviewers (this should be overseen by a larger group of ACs)."
While Oregon State University Distinguished Professor Thomas G. Dietterich isn’t on board with banning reviewers from submitting works as authors, he did suggest that better reviewer training may be required. “We also need to rethink the entire conference publication system for the 21st century,” reads his tweet.
Professor Dietterich isn’t alone in calling for wholesale changes in this regard. As Synced previously reported, Turing awardee Yoshua Bengio has proposed authors first submit papers to a fast turnaround journal, and conference program committees could then select the papers they like from the list of accepted and reviewed papers.
With even senior researchers unsure of how to address issues in paper review processes, it’s understandable that NeurIPS rookies would be perplexed when their works receive problematic reviews. KLE Technological University Research Assistant Adarsh Jamadandi tweeted, “So one of @NeurIPSConf reviewer says The paper is not clearly written. One of the reviewer says well written and very suitable for #NeurIPS2020. As a first timer, how should I write a rebuttal? Any insights are welcome please…”
NeurIPS paper authors can respond to the reviews and engage in discussions with reviewers during a rebuttal period that runs through August 13, and the final decisions will be sent to authors by the end of September.
NeurIPS 2020 will be a virtual-only event running December 5 through 12.
Reporter: Fangyu Cai | Editor: Michael Sarazen
Synced Report | A Survey of China’s Artificial Intelligence Solutions in Response to the COVID-19 Pandemic — 87 Case Studies from 700+ AI Vendors
This report offers a look at how China has leveraged artificial intelligence technologies in the battle against COVID-19. It is also available on Amazon Kindle. Along with this report, we also introduced a database covering an additional 1,428 artificial intelligence solutions across 12 pandemic scenarios.