With the increasing deployment of AI systems in high-stakes decision-making domains such as healthcare, finance and law, the technology’s explainability has become an issue of public concern. Explainable AI (XAI) is critical for earning end-users’ trust in the outputs generated by machine learning (ML) algorithms, and in recent years the research community has strived to bring more transparency to the inner workings of AI systems by addressing this “black box” problem. The question of just who will open the box has, however, remained relatively underexplored.
In a new paper, a team from Georgia Institute of Technology, Cornell University and IBM Research conducts a mixed-methods study on how people with and without expert knowledge of AI perceive different types of AI explanations.
The team summarizes the study’s contributions as:
- We quantify the user preferences (what) of three types of AI explanations along five dimensions of user perceptions.
- We qualitatively situate how one’s AI background (or lack thereof) influences one’s perception of the explanations.
- We elucidate why the group differences might exist and interpret them through the conceptual lenses of heuristics and appropriation.
- Using our findings, we identify potentially negative consequences (like harmful manipulation of user perceptions and over-trust in XAI systems) and propose mitigation strategies.
The researchers first conducted a systematic review of related work on trust, acceptance, and engagement with autonomous or AI systems, followed by informal interviews with six experts spanning human-computer interaction (HCI), AI, and human-robot interaction (HRI).
The team employed a task environment to situate the design of their AI agents. Human participants from AI and non-AI groups observed three agents (Robots A, B and C) as they performed an identical sequence of actions. The key difference between the agents is how they “think out loud” for the humans’ benefit as they perform these actions:
- The Rationale-Generating (RG) agent (Robot A) uses natural language rationales explaining the “why” behind its actions.
- The Action-Declaring (AD) agent (Robot B) states its actions in natural language but without any justifications.
- The Numerical-Reasoning (NR) agent (Robot C) simply outputs numerical Q-values for the current state, with no language component.
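The contrast between the three explanation styles can be illustrated with a minimal sketch. This is not the paper’s implementation; the function names, phrasings and Q-values below are hypothetical, meant only to show how the same action could be surfaced to a user in the three different formats:

```python
# Hypothetical sketch of the three explanation styles described above.
# All names, rationale text and Q-values are illustrative, not from the paper.

def explain(agent_type, action, q_values):
    """Return the explanation each agent type would show for one action."""
    if agent_type == "RG":   # Rationale-Generating: natural-language "why"
        return f"I move {action} because the supplies are likely that way."
    if agent_type == "AD":   # Action-Declaring: the action, with no justification
        return f"I move {action}."
    if agent_type == "NR":   # Numerical-Reasoning: raw Q-values, no language
        return str(q_values)
    raise ValueError(f"unknown agent type: {agent_type}")

# Example: one state where "right" has the highest (hypothetical) Q-value.
q = {"up": 0.12, "down": 0.05, "left": 0.31, "right": 0.87}
for kind in ("RG", "AD", "NR"):
    print(kind, "->", explain(kind, "right", q))
```

The point of the study’s design is visible even in this toy form: all three agents take the same action, so any difference in how participants rank them must come from the explanation format alone.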
The human participants ranked the three robots along five perception dimensions (understandability, confidence, intelligence, friendliness, and second chance, i.e. how past failures impact future collaboration chances), and justified their choices using open-ended text responses.
The AI group comprised students enrolled in CS programs and taking AI courses, while the non-AI group was recruited from Amazon Mechanical Turk (AMT). All participants watched an orientation video outlining a scenario and were asked to imagine themselves as space explorers faced with a search-and-rescue mission involving robots. The participants then watched six counterbalanced and randomized videos showing the three robots succeeding and failing to retrieve essential mission supplies using identical sequences of actions. After ranking the robots on each dimension, participants justified and contextualized their rankings in free-text responses.
The team gained a number of valuable insights from their empirical studies:
- Both the AI and non-AI groups had unwarranted faith in numbers, but exhibited it for different reasons and to differing degrees, with the AI group showing a higher propensity to over-trust numerical representations and potentially be misled by them.
- Both groups found explanatory value in the explanations beyond the purposes they were designed for.
- Even with their aligned appreciation for human-likeness, the two groups had different requirements concerning what counts as humanlike explanations.
Overall, the paper provides a novel perspective on how people with and without AI backgrounds form and express their perceptions of AI explanations. The researchers regard the work as a formative step in advancing a pluralistic human-centred XAI discourse that can help bridge the creator-consumer gap in AI.
The paper The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations is on arXiv.
Author: Hecate He | Editor: Michael Sarazen, Chain Zhang