Affiliation:
1. Texas A&M University, USA
2. University of Florida, Gainesville, FL, USA
Abstract
The need for interpretable and accountable intelligent systems grows along with the prevalence of artificial intelligence (AI) applications used in everyday life. Explainable AI (XAI) systems are intended to self-explain the reasoning behind system decisions and predictions. Researchers from different disciplines work together to define, design, and evaluate explainable systems. However, scholars from different disciplines focus on different objectives and fairly independent topics of XAI research, which poses challenges for identifying appropriate design and evaluation methodology and for consolidating knowledge across efforts. To this end, this article presents a survey and framework intended to share knowledge and experiences of XAI design and evaluation methods across multiple disciplines. Aiming to support diverse design goals and evaluation methods in XAI research, after a thorough review of XAI-related papers in the fields of machine learning, visualization, and human-computer interaction, we present a categorization of XAI design goals and evaluation methods. Our categorization maps design goals for different XAI user groups to their evaluation methods. From our findings, we develop a framework with step-by-step design guidelines, paired with evaluation methods, to close the iterative design and evaluation cycles in multidisciplinary XAI teams. Further, we provide summarized, ready-to-use tables of evaluation methods and recommendations for different goals in XAI research.
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Human-Computer Interaction
Cited by
299 articles.