"If it is easy to understand then it will have value": Examining Perceptions of Explainable AI with Community Health Workers in Rural India

Authors:

Chinasa T. Okolo1, Dhruv Agarwal1, Nicola Dell2, Aditya Vashistha1

Affiliations:

1. Cornell University, Ithaca, NY, USA

2. Cornell Tech, New York, NY, USA

Abstract

AI-driven tools are increasingly deployed to support low-skilled community health workers (CHWs) in hard-to-reach communities in the Global South. This paper examines how CHWs in rural India engage with and perceive AI explanations, and how we might design explainable AI (XAI) interfaces that are more understandable to them. We conducted semi-structured interviews with CHWs who interacted with a design probe for predicting neonatal jaundice, in which AI recommendations are accompanied by explanations. We (1) identify how CHWs interpreted AI predictions and the associated explanations, (2) unpack the benefits and pitfalls they perceived in the explanations, and (3) detail how different design elements of the explanations impacted their AI understanding. Our findings demonstrate that while CHWs struggled to understand the AI explanations, they nevertheless expressed a strong preference for the explanations to be integrated into AI-driven tools and perceived several benefits of the explanations, such as helping CHWs learn new skills and improving patient trust in AI tools and in CHWs. We conclude by discussing which elements of AI need to be made explainable to novice AI users like CHWs and outline concrete design recommendations to improve the utility of XAI for novice AI users in non-Western contexts.

Funder

National Science Foundation

Publisher

Association for Computing Machinery (ACM)

