The Measurement and Mitigation of Algorithmic Bias and Unfairness in Healthcare AI Models Developed for the CMS AI Health Outcomes Challenge

Authors:

Carol J. McCall, Dave DeCaprio, Joseph Gartner

Abstract

Algorithms play an increasingly prevalent role in healthcare and are used to target interventions, reward performance, and distribute resources, including funding. Yet it is widely recognized that many algorithms used today may inadvertently encode and perpetuate biases and contribute to health inequities. Artificial intelligence algorithms, in addition to being assessed for accuracy, must be evaluated with respect to whether they could impact disparities in health outcomes.

This paper presents details and results of ClosedLoop's methods to measure and mitigate bias in machine learning models, which formed the winning submission in the CMS AI Health Outcomes Challenge. The submission applied a comprehensive framework for assessing algorithmic bias and fairness, and developed and applied a metric appropriate for real-world healthcare settings that can be used to assess and reduce the presence and impact of unfairness.

The submission demonstrated precision and transparency in the comprehensive measurement of algorithmic bias from multiple sources, including data representativeness, subgroup validity, label choice, and feature bias. For feature bias, the submission made a detailed examination of feature selection and diversity, including evaluating the appropriateness of including race in algorithm development. It also demonstrated how fairness criteria can be used to adjust care management enrollment thresholds to mitigate unfairness.

Computational methods and measures exist that allow healthcare organizations to measure and mitigate algorithmic bias and unfairness in models used in practical healthcare settings. It is possible for healthcare organizations to adopt policies and practices that enable them to design, implement, and maintain algorithms that are highly accurate, unbiased, and fair.

Author summary

AI has come of age through the alchemy of cheap parallel (cloud) computing combined with the availability of big data and better algorithms. Problems that seemed unconquerable a few years ago are being solved, at times with startling gains. AI has finally arrived in health care, where the stakes are high and the complexity and criticality of the issues can far outweigh those of other applications. AI's arrival is good; organizations are confronting forces strong enough that they may yield only once AI is brought to bear. AI has started to play a central role in targeting care interventions, rewarding physician performance, and distributing resources, including funding.

Here is the problem: if health care's algorithms are biased (something that researchers at the Center for Applied Artificial Intelligence at the University of Chicago's Booth School of Business have concluded), then AI solutions designed to drive better outcomes can make things worse. The good news is that these experts also said that algorithmic bias, while pervasive, is not inevitable. The key is to define the processes and tools that can help measure and address it. The work presented in this paper represents an important contribution to these tools and a real-world demonstration of results.
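To make the threshold-adjustment idea concrete, here is a minimal sketch (not the paper's actual method; all function names, group labels, and scores are hypothetical) of one common fairness criterion: choosing a separate risk-score cutoff for each subgroup so that every group is enrolled in care management at the same rate, regardless of differences in how scores are distributed across groups.

```python
# Illustrative sketch of per-group enrollment thresholds that equalize
# selection rates across subgroups (a demographic-parity-style criterion).
# All group names and risk scores below are hypothetical.

def per_group_thresholds(scores_by_group, enrollment_rate):
    """For each group, pick the score cutoff that enrolls the top
    `enrollment_rate` fraction of that group's members."""
    thresholds = {}
    for group, scores in scores_by_group.items():
        ranked = sorted(scores, reverse=True)
        k = max(1, int(round(enrollment_rate * len(ranked))))
        # The cutoff is the score of the last member who still qualifies.
        thresholds[group] = ranked[k - 1]
    return thresholds

# Hypothetical risk scores for two subgroups with different score scales.
scores = {
    "group_a": [0.91, 0.75, 0.62, 0.40, 0.33],
    "group_b": [0.55, 0.48, 0.31, 0.20, 0.12],
}
cutoffs = per_group_thresholds(scores, enrollment_rate=0.4)
# Each group now enrolls its top 40%, so a single global cutoff's tendency
# to under-enroll the lower-scoring group is avoided.
```

A single global threshold applied to both groups above would enroll far fewer members of `group_b`; per-group cutoffs are one simple way a fairness criterion can reshape enrollment decisions, at the cost of treating scores from different groups as non-comparable.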

Publisher

Cold Spring Harbor Laboratory
