“Figuring out how to flexibly coordinate a collaborative endeavour is a fundamental challenge for any agent in a multi-agent world,” explains a team of researchers from MIT, UChicago, Harvard and Diffeo. Their recipe for cooperation is Bayesian Delegation, a decentralized multi-agent learning mechanism that enables agents to coordinate their behaviour on the fly. Bayesian Delegation uses Bayesian inference with inverse planning to rapidly infer the sub-tasks other agents are working on, a probabilistic approach that enables agents to better predict others’ intentions despite uncertainty and ambiguity of behaviours.
The paper Too Many Cooks: Bayesian Inference for Coordinating Multi-Agent Collaboration points to theory-of-mind (ToM) capabilities that allow humans to understand intentions from actions and cooperate in coordinated ways, and proposes that Bayesian Delegation could equip AI agents with such abilities.
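The inference described above can be sketched in code. The following is a minimal, illustrative sketch (not the paper's implementation): an agent keeps a posterior over which sub-task a partner is pursuing and updates it from an observed action, using a softmax-over-action-values likelihood as a stand-in for inverse planning. All names (`update_beliefs`, the example sub-tasks, the value table) are assumptions for illustration.

```python
import numpy as np

def softmax(x, beta=1.0):
    # Boltzmann action distribution: higher beta = more rational agent.
    z = beta * (x - np.max(x))
    e = np.exp(z)
    return e / e.sum()

def update_beliefs(prior, action_values, observed_action, beta=2.0):
    """Bayesian update: P(task | action) ∝ P(action | task) * P(task).

    prior:           (n_tasks,) current belief over the partner's sub-task
    action_values:   (n_tasks, n_actions) value of each action under a
                     planner committed to each candidate sub-task
    observed_action: index of the action the partner actually took
    """
    likelihood = np.array([
        softmax(action_values[t], beta)[observed_action]
        for t in range(len(prior))
    ])
    posterior = likelihood * prior
    return posterior / posterior.sum()

# Toy example: two candidate sub-tasks ("chop", "plate"), three actions.
prior = np.array([0.5, 0.5])
action_values = np.array([
    [2.0, 0.0, 0.0],  # under "chop", action 0 is best
    [0.0, 2.0, 0.0],  # under "plate", action 1 is best
])
posterior = update_beliefs(prior, action_values, observed_action=0)
# After seeing action 0, belief shifts strongly toward "chop".
```

Repeating this update over a trajectory is what lets an agent resolve initially ambiguous behaviour into a confident estimate of a partner's intent.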
In their research, the team recreated the environment of Overcooked!, a popular co-op cooking simulation game where players control kitchen chefs who overcome various obstacles and hazards to prepare meal orders. Players must collaborate to deliver orders promptly, which makes the environment ideal for the study of real-time decision-making and strategy development in AI agents.
The team chose cooking because it shares features with many other object-oriented tasks, such as construction and assembly. Sub-tasks such as chopping, plating, and delivering also allowed the researchers to study agents under challenging coordination conditions:
- Divide and conquer: agents should work in parallel when sub-tasks can be efficiently carried out individually
- Cooperation: agents should work together on the same sub-task when most efficient or necessary
- Spatio-temporal movement: agents should avoid getting in each other’s way at any time
While other deep reinforcement learning studies have used Overcooked!-inspired environments to train agents with self-play and human data, the new approach focuses on techniques agents learn dynamically while interacting with others. The researchers say the work shares goals with the “ad-hoc coordination literature, where agents must adapt on the fly to variations in task, environment, or team.”
In evaluations on a suite of new multi-agent environments, Bayesian Delegation outperformed four baseline agent algorithms in self-play. It also demonstrated ad-hoc collaboration, coordinating with other agent types even without prior experience. And encouragingly, behavioural experiments suggested that Bayesian Delegation’s inferences about others’ intents are close to human judgements.
The team notes that as systems scale up their number of agents, “there can be ‘too many cooks’ in the kitchen!” and that efficiently coordinating the behaviour of larger groups of agents will be essential for partnering agents with human teams or other agents in such complex environments.
The paper Too Many Cooks: Bayesian Inference for Coordinating Multi-Agent Collaboration is on arXiv.
Reporter: Fangyu Cai | Editor: Michael Sarazen