Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review

Author:

Sahil Verma (1), Varich Boonsanong (1), Minh Hoang (1), Keegan Hines (2), John Dickerson (2), Chirag Shah (3)

Affiliation:

1. Computer Science and Engineering, University of Washington, Seattle, United States

2. Arthur AI, Washington DC, United States

3. University of Washington, Seattle, United States

Abstract

Machine learning plays a role in many deployed decision systems, often in ways that are difficult or impossible for human stakeholders to understand. Explaining, in a human-understandable way, the relationship between the input and output of machine learning models is essential to the development of trustworthy machine-learning-based systems. A burgeoning body of research seeks to define the goals and methods of explainability in machine learning. In this paper, we review and categorize research on counterfactual explanations, a specific class of explanation that describes how a model's output would have changed had its input been altered in a particular way. Modern approaches to counterfactual explainability in machine learning draw connections to established legal doctrine in many countries, making them appealing for fielded systems in high-impact areas such as finance and healthcare. We design a rubric of desirable properties for counterfactual explanation algorithms and comprehensively evaluate all currently proposed algorithms against it. The rubric eases comparison and comprehension of the advantages and disadvantages of different approaches, and serves as an introduction to the major research themes in this field. We also identify gaps and discuss promising research directions in the space of counterfactual explainability.
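As a minimal, hypothetical illustration of the idea the abstract describes (not an algorithm from the paper): for a plain linear classifier, the counterfactual input nearest in L2 distance has a closed form — project the input onto the decision boundary and step slightly past it. The function name and the linear-model setting are assumptions made here for illustration only.

```python
import numpy as np

def nearest_counterfactual(x, w, b, eps=1e-3):
    """Closest point (in L2 distance) to x lying on the other side of the
    decision boundary w.x + b = 0 of a linear classifier.

    Projects x onto the hyperplane, then steps a small margin eps past it,
    flipping the predicted class while changing x as little as possible.
    """
    margin = w @ x + b  # signed score of the original input
    # Distance to move along w: projection onto the boundary, plus eps beyond it.
    step = margin / (w @ w) + np.sign(margin) * eps / np.linalg.norm(w)
    return x - step * w
```

For instance, a loan applicant represented as a feature vector and rejected by a linear scoring model would receive back the minimally changed feature vector that crosses the approval boundary — a concrete "what could have happened" statement.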

Publisher

Association for Computing Machinery (ACM)

References: 381 articles.

1. Abubakar Abid, Mert Yuksekgonul, and James Zou. 2022. Meaningfully Debugging Model Mistakes using Conceptual Counterfactual Explanations. In Proceedings of the 39th International Conference on Machine Learning. PMLR, 66–88. https://proceedings.mlr.press/v162/abid22a.html

2. Counterfactual Graphs for Explainable Classification of Brain Networks

3. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)

4. Charu C. Aggarwal, Chen Chen, and Jiawei Han. 2010. The Inverse Classification Problem. J. Comput. Sci. Technol. (2010), 458–468. https://doi.org/10.1007/s11390-010-9337-x

5. Ulrich Aïvodji, Hiromi Arai, Olivier Fortineau, Sébastien Gambs, Satoshi Hara, and Alain Tapp. 2019. Fairwashing: the Risk of Rationalization. In Proceedings of the 36th International Conference on Machine Learning. PMLR. https://proceedings.mlr.press/v97/aivodji19a.html

Cited by 8 articles.

1. Adversarial Machine Learning for Social Good: Reframing the Adversary as an Ally. IEEE Transactions on Artificial Intelligence, 2024-09.

2. Reinforced Path Reasoning for Counterfactual Explainable Recommendation. IEEE Transactions on Knowledge and Data Engineering, 2024-07.

3. Multicriteria Model-Agnostic Counterfactual Explainability for Classifiers. 2024 International Joint Conference on Neural Networks (IJCNN), 2024-06-30.

4. Explainable Database Management System Configuration Tuning through Counterfactuals. 2024 IEEE 40th International Conference on Data Engineering (ICDE), 2024-05-13.

5. Efficient Feature Selection Algorithm Based on Counterfactuals. 2024 6th International Conference on Communications, Information System and Computer Engineering (CISCE), 2024-05-10.
