The Association for Computational Linguistics (ACL) held its 57th annual meeting from July 28 to August 2 in Florence, Italy. Today, the ACL 2019 organizing committee announced its eight paper awards: Best Long Paper, Best Short Paper, Best Demo Paper, and five Outstanding Paper awards.
Best Long Paper
Bridging the Gap between Training and Inference for Neural Machine Translation from researchers at the Chinese Academy of Sciences, Tencent, Worcester Polytechnic Institute, and Huawei Noah’s Ark Lab. The paper tackles the mismatch between training, where a model predicts each word conditioned on the ground-truth context, and inference, where it must condition on its own previous predictions. The proposed remedy samples context words from both the ground-truth sequence and the model’s predicted sequence during training. Researchers tested the approach on Chinese to English and WMT’14 English to German translation tasks and achieved significant improvements on various datasets.
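The training-time mixing can be illustrated with a toy sketch (our illustration, not the paper’s code; the function name and the fixed sampling probability are assumptions — the paper itself samples with a decaying probability and draws predictions from word- and sentence-level oracles):

```python
import random

def mix_context(ground_truth, predicted, p_truth=0.75, seed=0):
    """At each decoding step during training, feed the ground-truth
    token with probability p_truth, otherwise the model's own
    prediction, so the context seen in training looks more like the
    context the model will see at inference time.
    (Toy sketch only; see the paper for the actual schedule.)"""
    rng = random.Random(seed)
    return [gt if rng.random() < p_truth else pred
            for gt, pred in zip(ground_truth, predicted)]
```

With `p_truth=1.0` this reduces to ordinary teacher forcing; with `p_truth=0.0` the model trains entirely on its own predictions.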
Best Short Paper
Do you know that Florence is packed with visitors? Evaluating state-of-the-art models of speaker commitment from researchers at the Ohio State University. The paper tests the hypothesis that linguistic deficits drive the error patterns of speaker commitment models, analyzing the linguistic correlates of model errors on a challenging naturalistic dataset.
Best Demo Paper
OpenKiwi: An Open Source Framework for Quality Estimation from researchers at Unbabel and Instituto de Telecomunicações. The paper introduces OpenKiwi, a PyTorch-based open source framework for machine translation quality estimation.
Outstanding Paper Awards
Emotion-Cause Pair Extraction: A New Task to Emotion Analysis in Texts from researchers at Nanjing University of Science and Technology. Researchers propose a new task called “emotion-cause pair extraction (ECPE)” to extract potential pairs of emotions and corresponding causes in a document.
A Simple Theoretical Model of Importance for Summarization from researcher Maxime Peyrard at École polytechnique fédérale de Lausanne (EPFL). Peyrard argues in the paper that establishing theoretical models of Importance will help provide a better understanding of the task and improve summarization systems.
Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems from researchers at the Hong Kong University of Science and Technology and Salesforce Research. In this paper, researchers introduce a Transferable Dialogue State Generator (TRADE) that generates dialogue states from utterances using a copy mechanism, facilitating knowledge transfer when predicting (domain, slot, value) triplets not seen during training.
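The copy-mechanism idea can be shown with a toy pointer-style mixture (our illustration, not TRADE’s implementation; all names and numbers below are assumptions):

```python
def copy_mechanism(vocab_dist, source_tokens, attention, p_gen):
    """Mix a generation distribution over the vocabulary with a copy
    distribution given by attention weights over the source tokens.
    A word absent from the vocabulary can still be produced by
    copying it from the input utterance.
    (Toy sketch of the general pointer/copy idea.)"""
    final = {w: p_gen * p for w, p in vocab_dist.items()}
    for tok, attn in zip(source_tokens, attention):
        final[tok] = final.get(tok, 0.0) + (1 - p_gen) * attn
    return final
```

The mixture stays a valid probability distribution as long as both input distributions sum to one.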
We need to talk about standard splits from researchers at the City University of New York and Oregon Health & Science University. Researchers examine the stability of system rankings across multiple training-testing splits, conducting replication and reproduction experiments with part-of-speech taggers that claimed state-of-the-art performance on a popular “standard split.” They failed to reproduce some rankings using randomly generated splits, and suggest that randomly generated splits be used in system comparisons.
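The methodology can be sketched in miniature (hypothetical data and function names, not the authors’ code; real comparisons would retrain on each training split and test significance, as the paper does):

```python
import random

def rank_systems(per_item_scores, n_splits=5, test_frac=0.5, seed=0):
    """Re-rank systems on several random test splits of the same
    evaluation items. If the ranking flips from split to split, a
    comparison on one 'standard split' is not trustworthy.
    (Toy sketch: scores per item are given, no retraining.)"""
    rng = random.Random(seed)
    items = list(range(len(next(iter(per_item_scores.values())))))
    rankings = []
    for _ in range(n_splits):
        rng.shuffle(items)
        test = items[: max(1, int(len(items) * test_frac))]
        mean = {sys: sum(scores[i] for i in test) / len(test)
                for sys, scores in per_item_scores.items()}
        rankings.append(sorted(mean, key=mean.get, reverse=True))
    return rankings
```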
Zero-Shot Entity Linking by Reading Entity Descriptions from researchers at the University of Michigan and Google Research. The paper presents the zero-shot entity linking task, in which mentions must be linked to unseen entities using only their text descriptions. To enable robust transfer to highly specialized domains, researchers assume that no metadata or alias tables are available.
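The zero-shot setting can be illustrated with a toy linker (our illustration, not the paper’s model; simple word overlap stands in for the paper’s learned reading model, and all names and example strings are our own):

```python
def link_by_description(mention_context, entity_descriptions):
    """Link a mention to the candidate entity whose text description
    best matches the mention's context — using nothing but the
    descriptions, with no alias tables or metadata.
    (Toy sketch: word overlap in place of a learned reader.)"""
    ctx = set(mention_context.lower().split())
    def overlap(name):
        return len(ctx & set(entity_descriptions[name].lower().split()))
    return max(entity_descriptions, key=overlap)
```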
This year, the ACL reported a record-breaking submission total of 2,906 papers, almost doubling last year’s 1,544 submissions. The 58th annual meeting of the Association for Computational Linguistics (ACL 2020) will take place in downtown Seattle, Washington, from July 5 through July 10, 2020.
Journalist: Fangyu Cai | Editor: Michael Sarazen