Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations

Authors:

Valerie Chen¹, Q. Vera Liao², Jennifer Wortman Vaughan³, Gagan Bansal⁴

Affiliations:

1. Carnegie Mellon University, Pittsburgh, PA, USA

2. Microsoft Research, Montreal, Canada

3. Microsoft Research, New York, NY, USA

4. Microsoft Research, Redmond, WA, USA

Abstract

AI explanations are often mentioned as a way to improve human-AI decision-making, but empirical studies have not found consistent evidence of explanations' effectiveness and, on the contrary, suggest that they can increase overreliance when the AI system is wrong. While many factors may affect reliance on AI support, one important factor is how decision-makers reconcile their own intuition---beliefs or heuristics, based on prior knowledge, experience, or pattern recognition, used to make judgments---with the information provided by the AI system to determine when to override AI predictions. We conduct a think-aloud, mixed-methods study with two explanation types (feature- and example-based) for two prediction tasks to explore how decision-makers' intuition affects their use of AI predictions and explanations, and ultimately their choice of when to rely on AI. Our results identify three types of intuition involved in reasoning about AI predictions and explanations: intuition about the task outcome, features, and AI limitations. Building on these, we summarize three observed pathways for decision-makers to apply their own intuition and override AI predictions. We use these pathways to explain why (1) the feature-based explanations we used did not improve participants' decision outcomes and increased their overreliance on AI, and (2) the example-based explanations we used improved decision-makers' performance over feature-based explanations and helped achieve complementary human-AI performance. Overall, our work identifies directions for further development of AI decision-support systems and explanation methods that help decision-makers effectively apply their intuition to achieve appropriate reliance on AI.
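For readers unfamiliar with the terminology, the sketch below shows how the two explanation styles named in the abstract are commonly produced. It is only an illustration: the scikit-learn calls, dataset, and variable names are assumptions chosen for brevity, not the pipeline used in the paper's study.

```python
# Illustrative sketch of the two explanation types compared in the study.
# The dataset, model, and method choices here are assumptions, not the
# authors' actual study materials.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.neighbors import NearestNeighbors

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

# Feature-based explanation: rank input features by their influence on the
# model's predictions (permutation importance here; methods such as SHAP
# or LIME play the same role in many deployed systems).
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top = np.argsort(result.importances_mean)[::-1][:3]
print("Most influential features:", data.feature_names[top])

# Example-based explanation: retrieve the training instances most similar
# to the case at hand, so the decision-maker can compare it to precedents.
nn = NearestNeighbors(n_neighbors=3).fit(X)
_, idx = nn.kneighbors(X[:1])
print("Nearest training examples:", idx[0], "with labels:", y[idx[0]])
```

In the study's terms, the first output is the kind of information a feature-based explanation surfaces to a decision-maker, and the second is what an example-based explanation surfaces.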

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Networks and Communications, Human-Computer Interaction, Social Sciences (miscellaneous)

References: 87 articles

Cited by 20 articles.
