Evaluating machine-generated explanations: a “Scorecard” method for XAI measurement science

Authors:

Robert R. Hoffman, Mohammadreza Jalaeian, Connor Tate, Gary Klein, Shane T. Mueller

Abstract

Introduction: Many Explainable AI (XAI) systems provide explanations that are only clues or hints about the underlying computational models, such as feature lists, decision trees, or saliency images. However, a user might want answers to deeper questions: How does it work? Why did it do that instead of something else? What can it get wrong? How might XAI system developers evaluate existing XAI systems with regard to the depth of support they provide for the user's sensemaking? How might they shape new XAI systems so as to support that sensemaking? What conceptual terminology might help developers approach this challenge?

Method: Based on cognitive theory, a scale was developed that reflects depth of explanation, that is, the degree to which explanations support the user's sensemaking. The seven levels of this scale form the Explanation Scorecard.

Results and discussion: The Scorecard was applied in an analysis of the recent literature, showing that many systems still present only low-level explanations. Developers can use the Scorecard to conceptualize how they might extend their machine-generated explanations to support the user in developing a mental model that instills appropriate trust and reliance. The article concludes with recommendations for improving XAI systems with regard to these cognitive considerations, and with recommendations on how the results of XAI system evaluations should be reported.

Publisher

Frontiers Media SA

Subject

Computer Science Applications, Computer Vision and Pattern Recognition, Human-Computer Interaction, Computer Science (miscellaneous)

References: 103 articles.


Cited by 1 article.

1. Situated Interpretation and Data: Explainability to Convey Machine Misalignment. IEEE Transactions on Human-Machine Systems, February 2024.
