Studies have shown that humans naturally learn languages at an early age largely through linguistic interactions with caregivers such as parents and teachers. Although modern machine learning techniques and architectures have produced powerful language models that can generate humanlike speech and text, interaction has thus far played little to no role in the development of these systems.
Motivated by the power and potential of language interaction, a research team from the University of Amsterdam and Meta AI Labs recently published the paper Towards Interactive Language Modeling, proposing a road map plotting the steps toward effective interactive language modelling and demonstrating the feasibility of the approach.

The researchers regard this as a pioneering work in the interactive language modelling space, and summarize their contributions as:
- We define the objective of interactive language modelling.
- We present a road map that details the steps that need to be taken towards this objective.
- We take the first steps on this road map, which show the initial feasibility of our approach.

The proposed interactive language modelling method is based on a teacher-student setup comprising four components: a teacher, a student, the interactions between them, and their shared environment. The objective is to build an automated teacher-student loop that attains good student performance for a fixed (low) number of bits transmitted in the interactions.
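Stated a little more formally, in notation of our own choosing (the paper may formalize this differently), the objective is a constrained optimization over teaching policies:

```latex
% Our notation, not the paper's: \pi is the teacher's teaching policy, m_t the
% message sent at round t, S_T the student after T rounds, B the bit budget.
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\mathrm{score}(S_T)\right]
\quad \text{subject to} \quad \sum_{t=1}^{T} \mathrm{bits}(m_t) \le B
```

That is, the teacher seeks the highest exam score it can induce in its student without exceeding the transmission budget.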
The researchers believe enabling such interactions can make language models more efficient and versatile. The teacher can adapt its teaching strategies based on student feedback, and a teacher fluent in one domain can teach its specifics to a student trained on another domain and vice versa. Such an interaction method could also improve performance on downstream applications such as second-language teaching.
In the proposed approach, the teacher transmits language data to its student under a budget: the fixed (low) number of bits mentioned above. This budget forces the teacher to actively choose and refine a teaching strategy, as it cannot simply transfer its knowledge to the student en masse.
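As a concrete illustration, here is a minimal, hypothetical sketch of one budgeted transmission step. The function name, the greedy selection rule, and the UTF-8 bit accounting are all our own choices, not code from the paper:

```python
from typing import Callable, List

def transmit_under_budget(
    candidates: List[str],
    score_fn: Callable[[str], float],  # teacher's estimate of each example's teaching value
    budget_bits: int,
) -> List[str]:
    """Greedily send the highest-value examples that still fit in the budget."""
    chosen, used = [], 0
    for text in sorted(candidates, key=score_fn, reverse=True):
        cost = len(text.encode("utf-8")) * 8  # bits needed to transmit this example
        if used + cost <= budget_bits:
            chosen.append(text)
            used += cost
    return chosen
```

Any heuristic can stand in for `score_fn`; the point is simply that the budget makes selection, rather than wholesale transfer, the teacher's central problem.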

The researchers apply this setup to take the first steps on their road map. They focus on the teacher side, aiming to train a teacher that optimally helps its student learn the language. The teacher's language understanding is represented by a pretrained causal transformer language model, in effect modelling the teacher as a native speaker; the student is likewise a causal transformer language model. Based on its teaching strategy, the teacher sends selected data to the student, and the student uses this data to train its language model. The student then takes an exam, and its score is sent back to the teacher as feedback the teacher can use to further adapt its teaching strategy.
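The loop itself can be sketched in a few lines. In the toy version below, an epsilon-greedy bandit stands in for the teacher and a noisy fixed quality score stands in for student training plus the exam; this is our own illustrative construction (in the paper, teacher and student are transformer language models and the exam is a held-out evaluation), but it shows how exam feedback can steer the teacher toward the best strategy:

```python
import random

class BanditTeacher:
    """Treats each candidate teaching strategy as a bandit arm, using the
    student's exam score as the reward for an epsilon-greedy update."""
    def __init__(self, strategies, epsilon=0.1):
        self.strategies = list(strategies)
        self.epsilon = epsilon
        self.value = {s: 0.0 for s in self.strategies}  # running reward estimates
        self.count = {s: 0 for s in self.strategies}

    def pick(self):
        if random.random() < self.epsilon:               # explore occasionally
            return random.choice(self.strategies)
        return max(self.strategies, key=self.value.get)  # otherwise exploit

    def update(self, strategy, reward):
        self.count[strategy] += 1
        self.value[strategy] += (reward - self.value[strategy]) / self.count[strategy]

# Toy stand-ins for the student and the exam: strategy "b" is secretly the
# best one, and each exam score is a noisy reflection of that quality.
TRUE_QUALITY = {"a": 0.3, "b": 0.8, "c": 0.5}

def train_student_and_run_exam(strategy):
    return TRUE_QUALITY[strategy] + random.gauss(0.0, 0.05)

teacher = BanditTeacher(["a", "b", "c"])
for _ in range(200):                                  # repeated interaction rounds
    s = teacher.pick()
    teacher.update(s, train_student_and_run_exam(s))
print(max(teacher.value, key=teacher.value.get))      # typically prints "b"
```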


The proposed method was evaluated on two tasks: teaching languages with different domains and teaching languages with different structures. The results show that the teacher gradually converges to the best teaching strategy, validating the initial feasibility of the approach.
Overall, this study is inspired and informed by how humans naturally learn languages through interactions, taking a first step toward interactive language modelling. The team hopes their work can inspire a larger research agenda in this area.
The paper Towards Interactive Language Modeling is on arXiv.
Author: Hecate He | Editor: Michael Sarazen
