Trading off accuracy and explainability in AI decision-making: findings from 2 citizens’ juries

Authors:

Sabine N. van der Veer,1 Lisa Riste,2,3 Sudeh Cheraghi-Sohi,2,4 Denham L. Phipps,3 Mary P. Tully,3 Kyle Bozentko,5 Sarah Atwood,5 Alex Hubbard,6 Carl Wiper,6 Malcolm Oswald,7,8 Niels Peek1,2

Affiliation:

1. Centre for Health Informatics, Division of Informatics, Imaging and Data Science, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK

2. NIHR Greater Manchester Patient Safety Translational Research Centre, School of Health Sciences, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK

3. Division of Pharmacy and Optometry, School of Health Sciences, The University of Manchester, Manchester, UK

4. Division of Population Health, Health Services Research & Primary Care, School of Health Sciences, The University of Manchester, Manchester, UK

5. Jefferson Center, Saint Paul, Minnesota, USA

6. Information Commissioner’s Office, Wilmslow, UK

7. School of Law, Faculty of Humanities, The University of Manchester, Manchester, UK

8. Citizens’ Juries CIC, Manchester, UK

Abstract

Objective: To investigate how the general public trades off explainability versus accuracy of artificial intelligence (AI) systems, and whether this differs between healthcare and non-healthcare scenarios.

Materials and Methods: Citizens’ juries are a form of deliberative democracy that elicits informed judgment from a representative sample of the general public around policy questions. We organized two 5-day citizens’ juries in the UK with 18 jurors each. Jurors considered 3 AI systems with different levels of accuracy and explainability in 2 healthcare and 2 non-healthcare scenarios. For each scenario, jurors voted for their preferred system; votes were analyzed descriptively. Qualitative data on the considerations behind their preferences included transcribed audio-recordings of plenary sessions, observational field notes, outputs from small group work, and free-text comments accompanying jurors’ votes; qualitative data were analyzed thematically by scenario, per and across AI systems.

Results: In healthcare scenarios, jurors favored accuracy over explainability, whereas in non-healthcare contexts they valued explainability equally to, or more than, accuracy. Jurors’ considerations in favor of accuracy regarded the impact of decisions on individuals and society, and the potential to increase efficiency of services. Reasons for emphasizing explainability included increased opportunities for individuals and society to learn and improve future prospects, and enhanced ability for humans to identify and resolve system biases.

Conclusion: Citizens may value explainability of AI systems in healthcare less than in non-healthcare domains, and less than is often assumed by professionals, especially when weighed against system accuracy. The public should therefore be actively consulted when developing policy on AI explainability.

Funder

National Institute for Health Research Greater Manchester Patient Safety Translational Research Centre

Information Commissioner’s Office

Publisher

Oxford University Press (OUP)

Subject

Health Informatics

Cited by 21 articles.
