Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables

Authors:

Kenneth Holstein1, Maria De-Arteaga2, Lakshmi Tumati1, Yanghuidi Cheng1

Affiliations:

1. Carnegie Mellon University, Pittsburgh, PA, USA

2. The University of Texas at Austin, Austin, TX, USA

Abstract

In many real-world contexts, successful human-AI collaboration requires humans to productively integrate complementary sources of information into AI-informed decisions. In practice, however, human decision-makers often lack understanding of what information an AI model has access to relative to themselves. There are few available guidelines regarding how to effectively communicate about unobservables: features that may influence the outcome but are unavailable to the model. In this work, we conducted an online experiment to understand whether and how explicitly communicating potentially relevant unobservables influences how people integrate model outputs and unobservables when making predictions. Our findings indicate that presenting prompts about unobservables can change how humans integrate model outputs and unobservables, but does not necessarily lead to improved performance. Furthermore, the impacts of these prompts can vary depending on decision-makers' prior domain expertise. We conclude by discussing implications for future research and the design of AI-based decision support tools.

Funder

UL Research Institutes through the Center for Advancing Safety of Machine Intelligence

CMU Block Center

UT Austin Good Systems

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Networks and Communications, Human-Computer Interaction, Social Sciences (miscellaneous)


Cited by 7 articles.
