While contemporary large-scale language models excel at text generation, they are designed to produce only a final text and lack capabilities — such as modifying and refining drafts — that characterize real-world collaborative writing workflows and are crucial for producing accurate, high-quality final texts.
A research team from Meta AI, Carnegie Mellon University, PSL University, and University College London addresses this limitation in the new paper PEER: A Collaborative Language Model. Their proposed PEER (Plan, Edit, Explain, Repeat) collaborative language model produces texts following a humanlike process — composing drafts, adding suggestions, proposing edits and providing explanations for its actions.
The team summarizes their main contributions as follows:
- We introduce PEER, a collaborative language model trained primarily on Wikipedia edit histories.
- By training PEER to infill parts of the writing process and leveraging self-training techniques, we make it applicable in any domain and enhance several of its core capabilities essential for collaborative writing.
- For different text-editing tasks, we show that PEER clearly outperforms various baselines, and we analyze the factors behind its strong performance.
- To facilitate further research on collaborative LMs, we release a variety of PEER models as well as the data and code used to train them.
The PEER model comprises four main steps: Plan, Edit, Explain, and Repeat. Given an input text, either the user or the PEER model first specifies a plan describing the actions to be applied. This plan is then realized via edits, which the model explains through textual comments and cited references. PEER repeats this process until it generates the desired output.
For their empirical study, the team initialized all instances of PEER from an LM-Adapted T5 (Text-to-Text Transfer Transformer, Raffel et al., 2020). They compared it with baselines (OPT, GPT-3, etc.) to evaluate its ability to follow plans and perform meaningful edits in domains with no available edit histories, and to examine how the PEER-Undo, PEER-Explain, and PEER-Document encoder-decoder models can boost performance.
The results show that PEER can continuously improve output quality during the iterative process and achieves impressive performance across various domains and editing tasks.
Overall, this work demonstrates that PEER can serve as a helpful and humanlike writing assistant that widens the scope and advances the performance of intelligent agents in producing high-quality textual outputs.
The paper PEER: A Collaborative Language Model is on arXiv.
Author: Hecate He | Editor: Michael Sarazen