The Neural Information Processing Systems Conference (NIPS) has become the world’s leading gathering for machine learning and computational neuroscience. Thousands of researchers submit papers to NIPS each year, hoping their work will be accepted and presented at the prestigious conference.
Each year NIPS recruits a large number of peer reviewers to write detailed, anonymous reviews of submitted papers. Peer reviewers are expected to be machine learning or neuroscience researchers with sufficient experience and knowledge.
But NIPS’ peer reviewer selection process came under scrutiny in the AI community last week, when a Reddit user who identified as a predoctoral student posted that they had been selected as a NIPS reviewer and needed advice on how to properly write paper reviews:
“I’m starting graduate school in the fall so I’ve never submitted or reviewed papers for this conference before. How do I chose papers to review? Should I start reading old NIPS papers to get an idea? Most importantly, how do I write a good review?”
A few suggestions appeared, such as “explicitly state what you did not understand/do not feel competent to judge” and “focus on the main points that contribute to your overall score/decision”. But many more commenters questioned the original poster’s suitability as a NIPS peer reviewer.
“If you have never written a paper for NIPS or any other ML conference, you should not be reviewing papers.”
“The review process is already very noisy, and I doubt you are capable of providing an accurate review. You currently don’t have the background knowledge to judge a paper.”
The Reddit post sparked a wide-ranging discussion across social media, drawing thousands of comments from the AI community on Twitter and Facebook. Carnegie Mellon University Assistant Professor Zachary Lipton tweeted:
“some people just really don’t know enough to be qualified reviewers, and one summer school won’t change that.”
Bar-Ilan University Senior Lecturer and well-known natural language processing (NLP) expert Yoav Goldberg replied sarcastically to Lipton’s tweet:
“yup. It’s ‘peer review’, not ‘person who did 5 TensorFlow tutorials review’.”
A record-high 3,240 papers were submitted to NIPS 2017, and someone had to read them. NIPS organizers have no way to cope with the growing paper volume other than expanding their pool of reviewers.
NIPS 2018 Press and Media Co-Chair Katherine Gorman told Synced, “Reviewers are an integral part of NIPS and the conference could not take place without their hard work. This year (as in 2016) we asked every reviewer in our initial reviewer pool to suggest (at least) one peer who works on similar topics and whom they would trust as a reviewer of their own papers on these topics. We asked that suggested reviewers be senior PhD students (with at least two papers at top-tier conferences, comparable to NIPS) or above (i.e., postdocs, professors, etc.).”
But for AI researchers whose months or even years of academic work may be condensed into a single paper, serious concern over possibly unqualified reviewers reading for a top AI conference is only natural.
Thomas G. Dietterich, Distinguished Professor at Oregon State University and former President of the Association for the Advancement of Artificial Intelligence, stressed that the exponential growth of the field should be matched by a corresponding growth in the pool of qualified reviewers.
The week-long NIPS 2018 will kick off in Montreal on December 3. Registration opens in September.
Journalist: Tony Peng | Editor: Michael Sarazen