- Title
Effects of Explanation Strategy and Autonomy of Explainable AI on Human–AI Collaborative Decision-making.
- Authors
Wang, Bingcheng; Yuan, Tianyi; Rau, Pei-Luen Patrick
- Abstract
This study examined the effects of explanation strategy (global vs. deductive vs. contrastive explanation) and autonomy level (high vs. low) of explainable agents on human–AI collaborative decision-making. A 3 × 2 mixed-design experiment was conducted, using a modified Mahjong game as the decision-making task. Forty-eight participants were divided into three groups, each collaborating with an agent using a different explanation strategy; each agent had two autonomy levels. The results indicated that global explanation incurred the lowest mental workload and the highest understandability. Contrastive explanation required the highest mental workload but yielded the highest perceived competence, affect-based trust, and social presence. Deductive explanation was the worst in terms of social presence. The high-autonomy agents incurred lower mental workload and lower interaction fluency but higher faith and social presence than the low-autonomy agents. These findings can help practitioners design user-centered explainable decision-support agents and choose appropriate explanation strategies for different situations.
- Subjects
ARTIFICIAL intelligence; DECISION making; EXPLANATION; TRUST
- Publication
International Journal of Social Robotics, 2024, Vol 16, Issue 4, p791
- ISSN
1875-4791
- Publication type
Article
- DOI
10.1007/s12369-024-01132-2