- Title
Trading off accuracy and explainability in AI decision-making: findings from 2 citizens' juries.
- Authors
van der Veer, Sabine N; Riste, Lisa; Cheraghi-Sohi, Sudeh; Phipps, Denham L; Tully, Mary P; Bozentko, Kyle; Atwood, Sarah; Hubbard, Alex; Wiper, Carl; Oswald, Malcolm; Peek, Niels
- Abstract
Objective: To investigate how the general public trades off explainability versus accuracy of artificial intelligence (AI) systems, and whether this differs between healthcare and non-healthcare scenarios.
Materials and Methods: Citizens' juries are a form of deliberative democracy that elicits informed judgment from a representative sample of the general public on policy questions. We organized two 5-day citizens' juries in the UK with 18 jurors each. Jurors considered 3 AI systems with different levels of accuracy and explainability in 2 healthcare and 2 non-healthcare scenarios. For each scenario, jurors voted for their preferred system; votes were analyzed descriptively. Qualitative data on the considerations behind their preferences included transcribed audio-recordings of plenary sessions, observational field notes, outputs from small group work, and free-text comments accompanying jurors' votes; qualitative data were analyzed thematically by scenario, per and across AI systems.
Results: In healthcare scenarios, jurors favored accuracy over explainability, whereas in non-healthcare contexts they valued explainability equally to, or more than, accuracy. Jurors' considerations in favor of accuracy concerned the impact of decisions on individuals and society, and the potential to increase the efficiency of services. Reasons for emphasizing explainability included increased opportunities for individuals and society to learn and improve future prospects, and an enhanced ability for humans to identify and resolve system biases.
Conclusion: Citizens may value explainability of AI systems in healthcare less than in non-healthcare domains, and less than is often assumed by professionals, especially when weighed against system accuracy. The public should therefore be actively consulted when developing policy on AI explainability.
- Subjects
United Kingdom; Artificial intelligence; Jury; Deliberative democracy; Decision making; Jurors; Research; Research methodology; Medical care; Medical cooperation; Evaluation research; Comparative studies; Research funding
- Publication
Journal of the American Medical Informatics Association, 2021, Vol. 28, Issue 10, p. 2128
- ISSN
1067-5027
- Publication type
journal article
- DOI
10.1093/jamia/ocab127