Developer Perspectives on Potential Harms of Machine Learning Predictive Analytics in Health Care: Qualitative Analysis

Authors:

Ariadne A. Nichol, Pamela L. Sankar, Meghan C. Halley, Carole A. Federico, Mildred K. Cho

Abstract

Background: Machine learning predictive analytics (MLPA) is increasingly used in health care to reduce costs and improve efficacy; it also has the potential to harm patients and trust in health care. Academic and regulatory leaders have proposed a variety of principles and guidelines to address the challenges of evaluating the safety of machine learning–based software in the health care context, but accepted practices do not yet exist. However, there appears to be a shift toward process-based regulatory paradigms that rely heavily on self-regulation. At the same time, little research has examined the perspectives of MLPA developers themselves about potential harms, even though their role will be essential in overcoming the “principles-to-practice” gap.

Objective: The objective of this study was to understand how developers of MLPA health care products perceived the potential harms of those products and how they responded to recognized harms.

Methods: We interviewed 40 individuals who were developing MLPA tools for health care at 15 US-based organizations, including data scientists, software engineers, and those in mid- and high-level management roles. These 15 organizations were selected to represent a range of organizational types and sizes from the 106 that we previously identified. We asked developers about their perspectives on the potential harms of their work, the factors that influence these harms, and their role in mitigation. We used standard qualitative analysis of transcribed interviews to identify themes in the data.

Results: We found that MLPA developers recognized a range of potential harms of MLPA to individuals, social groups, and the health care system, such as issues of privacy, bias, and system disruption. They also identified drivers of these harms related to the characteristics of machine learning and specific to the health care and commercial contexts in which the products are developed. MLPA developers also described strategies to respond to these drivers and potentially mitigate the harms. Opportunities included balancing algorithm performance goals with potential harms, emphasizing iterative integration of health care expertise, and fostering shared company values. However, their recognition of their own responsibility to address potential harms varied widely.

Conclusions: Even though MLPA developers recognized that their products can harm patients, the public, and even health systems, robust procedures to assess the potential for harms and the need for mitigation do not exist. Our findings suggest that, to the extent that new oversight paradigms rely on self-regulation, they will face serious challenges if harms are driven by features that developers consider inescapable in health care and business environments. Furthermore, effective self-regulation will require MLPA developers to accept responsibility for safety and efficacy and to know how to act accordingly. Our results suggest that, at the very least, substantial education will be necessary to fill the “principles-to-practice” gap.

Publisher

JMIR Publications Inc.

Subject

Health Informatics

