Share My Research is Synced’s column that welcomes scholars to share their own research breakthroughs with over 1.5M global AI enthusiasts. Beyond technological advances, Share My Research also calls for interesting stories behind the research and exciting research ideas. Contact us: chain.zhang@jiqizhixin.com
Meet the authors
Institutions: Penn State University, Duke University, Google DeepMind, University of Washington, Meta, Nanyang Technological University, and Oregon State University. The co-first authors are Shaokun Zhang of Penn State University and Ming Yin of Duke University.
In recent years, LLM Multi-Agent systems have garnered widespread attention for their collaborative approach to solving complex problems. However, it’s a common scenario for these systems to fail at a task despite a flurry of activity. This leaves developers with a critical question: which agent, at what point, was responsible for the failure? Sifting through vast interaction logs to pinpoint the root cause feels like finding a needle in a haystack—a time-consuming and labor-intensive effort.
This is a familiar frustration for developers. In increasingly complex Multi-Agent systems, failures are not only common but also incredibly difficult to diagnose due to the autonomous nature of agent collaboration and long information chains. Without a way to quickly identify the source of a failure, system iteration and optimization grind to a halt.
To address this challenge, researchers from Penn State University and Duke University, in collaboration with institutions including Google DeepMind, have introduced the novel research problem of “Automated Failure Attribution.” They have constructed the first benchmark dataset for this task, Who&When, and have developed and evaluated several automated attribution methods. This work not only highlights the complexity of the task but also paves a new path toward enhancing the reliability of LLM Multi-Agent systems.
The paper has been accepted as a Spotlight presentation at the top-tier machine learning conference, ICML 2025, and the code and dataset are now fully open-source.
Paper: https://arxiv.org/pdf/2505.00212
Code: https://github.com/mingyin1/Agents_Failure_Attribution
Dataset: https://huggingface.co/datasets/Kevin355/Who_and_When
Research Background and Challenges
LLM-driven Multi-Agent systems have demonstrated immense potential across many domains. However, these systems are fragile; errors by a single agent, misunderstandings between agents, or mistakes in information transmission can lead to the failure of the entire task.
Currently, when a system fails, developers are often left with manual and inefficient methods for debugging:
Manual Log Archaeology: Developers must manually review lengthy interaction logs to find the source of the problem.
Reliance on Expertise: The debugging process is highly dependent on the developer’s deep understanding of the system and the task at hand.
This “needle in a haystack” approach to debugging is not only inefficient but also severely hinders rapid system iteration and the improvement of system reliability. There is an urgent need for an automated, systematic method to pinpoint the cause of failures, effectively bridging the gap between “evaluation results” and “system improvement.”
Core Contributions
This paper makes several groundbreaking contributions to address the challenges above:
1. Defining a New Problem: The paper is the first to formalize “automated failure attribution” as a distinct research task: given a failed run, identify the failure-responsible agent and the decisive error step that led to the task’s failure.
2. Constructing the First Benchmark Dataset, Who&When: This dataset includes a wide range of failure logs collected from 127 LLM Multi-Agent systems, which were either algorithmically generated or hand-crafted by experts to ensure realism and diversity. Each failure log is accompanied by fine-grained human annotations for:
Who: The agent responsible for the failure.
When: The specific interaction step where the decisive error occurred.
Why: A natural language explanation of the cause of the failure.
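To make the annotation scheme concrete, here is a minimal sketch of what one Who&When record could look like in code. The field names and example values are illustrative assumptions for exposition, not the actual schema of the released Hugging Face dataset.

```python
from dataclasses import dataclass

# Hypothetical representation of one Who&When annotation record.
# The real dataset on Hugging Face may use different field names.
@dataclass
class FailureAnnotation:
    who: str   # agent responsible for the failure
    when: int  # index of the decisive error step in the interaction log
    why: str   # natural-language explanation of the failure cause

# Illustrative example (agent name and step index are made up):
example = FailureAnnotation(
    who="WebSurfer",
    when=7,
    why="The agent acted on a stale URL instead of re-querying the search engine.",
)
```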
3. Exploring Initial “Automated Attribution” Methods: Using the Who&When dataset, the paper designs and assesses three distinct methods for automated failure attribution:
– All-at-Once: This method provides the LLM with the user query and the complete failure log, asking it to identify the responsible agent and the decisive error step in a single pass. While cost-effective, it may struggle to pinpoint precise errors in long contexts.
– Step-by-Step: This approach mimics manual debugging by having the LLM review the interaction log sequentially, making a judgment at each step until the error is found. It is more precise at locating the error step but incurs higher costs and risks accumulating errors.
– Binary Search: A compromise between the first two methods, this strategy repeatedly divides the log in half, using the LLM to determine which segment contains the error. It then recursively searches the identified segment, offering a balance of cost and performance.
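The three strategies above can be sketched as follows. This is a hedged illustration of the control flow only: `ask_llm` is a hypothetical stand-in for an LLM call returning structured judgments, and the prompts are simplified, not the prompts from the paper’s released code.

```python
# Sketches of the three attribution strategies; `ask_llm` is an assumed
# callable that takes a prompt string and returns a short answer.

def all_at_once(query, log, ask_llm):
    """One pass over the full log: cheap, but long contexts hurt precision."""
    return ask_llm(f"Query: {query}\nLog: {log}\n"
                   "Name the responsible agent and the decisive error step.")

def step_by_step(query, log, ask_llm):
    """Walk the log sequentially until the LLM flags the decisive error."""
    for i, step in enumerate(log):
        verdict = ask_llm(f"Query: {query}\nStep {i}: {step}\n"
                          "Is this the decisive error? Answer yes/no.")
        if verdict == "yes":
            return i  # more precise, but one LLM call per step
    return None

def binary_search(query, log, ask_llm, lo=0, hi=None):
    """Recursively halve the log, asking which half contains the error."""
    if hi is None:
        hi = len(log)
    if hi - lo <= 1:
        return lo
    mid = (lo + hi) // 2
    half = ask_llm(f"Query: {query}\nSteps {lo}-{mid - 1} vs {mid}-{hi - 1}: "
                   "which segment contains the decisive error? Answer first/second.")
    if half == "first":
        return binary_search(query, log, ask_llm, lo, mid)
    return binary_search(query, log, ask_llm, mid, hi)
```

The cost trade-off is visible in the structure: all-at-once makes one call, step-by-step makes up to one call per log step, and binary search makes roughly log2(N) calls.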
Experimental Results and Key Findings
Experiments were conducted in two settings: one where the LLM knows the ground truth answer to the problem the Multi-Agent system is trying to solve (With Ground Truth) and one where it does not (Without Ground Truth). The primary model used was GPT-4o, though other models were also tested. The systematic evaluation of these methods on the Who&When dataset yielded several important insights:
– A Long Way to Go: Current methods are far from perfect. Even the best-performing single method achieved an accuracy of only about 53.5% in identifying the responsible agent and a mere 14.2% in pinpointing the exact error step. Some methods performed even worse than random guessing, underscoring the difficulty of the task.
– No “All-in-One” Solution: Different methods excel at different aspects of the problem. The All-at-Once method is better at identifying “Who,” while the Step-by-Step method is more effective at determining “When.” The Binary Search method provides a middle-ground performance.
– Hybrid Approaches Show Promise but at a High Cost: The researchers found that combining different methods, such as using the All-at-Once approach to identify a potential agent and then applying the Step-by-Step method to find the error, can improve overall performance. However, this comes with a significant increase in computational cost.
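The hybrid idea can be sketched as a two-stage pipeline: a single all-at-once pass to name the suspect agent, then a step-by-step scan restricted to that agent’s steps. The helpers `identify_agent` and `judge_step` below are hypothetical stand-ins for LLM calls, not functions from the released code.

```python
# Hedged sketch of a hybrid attribution pipeline. `identify_agent` and
# `judge_step` are assumed LLM-backed callables supplied by the caller.

def hybrid_attribution(log, identify_agent, judge_step):
    suspect = identify_agent(log)  # stage 1, all-at-once: answer "who"
    candidate_steps = [(i, step) for i, step in enumerate(log)
                       if step["agent"] == suspect]
    for i, step in candidate_steps:  # stage 2, step-by-step: answer "when"
        if judge_step(step):         # each check is one extra LLM query
            return suspect, i
    return suspect, None
```

Restricting stage 2 to the suspect’s steps is what improves precision, but every step-level check is an additional model call, which is where the reported cost increase comes from.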
– State-of-the-Art Models Struggle: Surprisingly, even the most advanced reasoning models, like OpenAI o1 and DeepSeek R1, find this task challenging. This highlights the inherent difficulty of automated failure attribution, which demands a higher level of reasoning than what is required for more conventional tasks.
– The Importance of Explicit Reasoning: Providing explicit prompts that require the LLM to explain its reasoning in the All-at-Once and Step-by-Step methods was shown to improve performance. 
– Context Length is a Limiting Factor: The study also revealed that as the context length of the failure logs increases, the performance of all attribution methods tends to decrease, with a more pronounced impact on the accuracy of identifying the error step.
Future Outlook: Paving the Way for More Reliable Multi-Agent Systems
“Automated failure attribution” is a crucial component in the development lifecycle of Multi-Agent systems. It has the potential to transform the challenge of identifying “what went wrong and who is to blame” from a perplexing mystery into a quantifiable and analyzable problem. By building a bridge between evaluation and improvement, we can ultimately create Multi-Agent systems that are more reliable, intelligent, and trustworthy.