I Know This Looks Bad, But I Can Explain: Understanding When AI Should Explain Actions In Human-AI Teams

Authors:

Rui Zhang¹, Christopher Flathmann¹, Geoff Musick¹, Beau Schelble¹, Nathan J. McNeese¹, Bart Knijnenburg¹, Wen Duan¹

Affiliation:

1. Clemson University, USA

Abstract

Explanation of artificial intelligence (AI) decision-making has become an important research area in human-computer interaction (HCI) and computer-supported teamwork research. While a substantial body of research has investigated AI explanations with the intent of improving AI transparency and human trust in AI, how AI explanations function in teaming environments remains unclear. Given that a major benefit of AI explanations is increased human trust, understanding how AI explanations impact that trust is crucial to effective human-AI teamwork. An online experiment was conducted with 156 participants to explore this question, examining how a teammate's explanations impact trust in the teammate and the perceived effectiveness of the team, and how these impacts vary depending on whether the teammate is a human or an AI. This study shows that explanations facilitated trust in an AI teammate when the AI explained why it had disobeyed a human's order, but hindered trust when the AI explained why it had lied to a human. In addition, participants' personal characteristics (e.g., their gender and individual ethical framework) impacted their perceptions of AI teammates both directly and indirectly across scenarios. Our study contributes to interactive intelligent systems and HCI by shedding light on how an AI teammate's actions and corresponding explanations are perceived by humans, and by identifying factors that impact trust and perceived effectiveness. This work provides an initial understanding of AI explanations in human-AI teams that future research can build upon when exploring how to implement AI explanations in collaborative environments.

Publisher

Association for Computing Machinery (ACM)

Subject

Artificial Intelligence, Human-Computer Interaction

