Abstract
Algorithms play an increasingly prevalent role in healthcare and are used to target interventions, reward performance, and distribute resources, including funding. Yet it is widely recognized that many algorithms used today may inadvertently encode and perpetuate biases and contribute to health inequities. Artificial intelligence algorithms, in addition to being assessed for accuracy, must be evaluated with respect to whether they could affect disparities in health outcomes.

This paper presents the details and results of ClosedLoop's methods for measuring and mitigating bias in machine learning models, which formed the winning submission in the CMS AI Health Outcomes Challenge. The submission applied a comprehensive framework for assessing algorithmic bias and fairness, and it developed and applied a metric suited to real-world healthcare settings that can be used to assess and reduce the presence and impact of unfairness.

The submission demonstrated precision and transparency in the comprehensive measurement of algorithmic bias from multiple sources, including data representativeness, subgroup validity, label choice, and feature bias. For feature bias, the submission examined feature selection and diversity in detail, including evaluating the appropriateness of including race in algorithm development. It also demonstrated how fairness criteria can be used to adjust care management enrollment thresholds to mitigate unfairness.

Computational methods and measures exist that allow healthcare organizations to measure and mitigate algorithmic bias and unfairness in models used in practical healthcare settings. It is possible for healthcare organizations to adopt policies and practices that enable them to design, implement, and maintain algorithms that are highly accurate, unbiased, and fair.

Author summary
AI has come of age through the alchemy of cheap parallel (cloud) computing combined with the availability of big data and better algorithms. Problems that seemed unconquerable a few years ago are being solved, at times with startling gains. AI has finally arrived in health care, where the stakes are high and the complexity and criticality of issues can far outweigh those of other applications. AI's arrival is good: organizations are confronting forces strong enough that they may yield only once AI is brought to bear. AI has started to play a central role in targeting care interventions, rewarding physician performance, and distributing resources, including funding.

Here is the problem: if health care's algorithms are biased, as researchers at the Center for Applied Artificial Intelligence at the University of Chicago's Booth School of Business have concluded, then AI solutions designed to drive better outcomes can make things worse. The good news is that these experts also said that algorithmic bias, while pervasive, is not inevitable. The key is to define the processes and tools that can help measure and address it. The work presented in this paper represents an important contribution to these tools and a real-world demonstration of results.
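The abstract notes that fairness criteria can be used to adjust care management enrollment thresholds. A minimal sketch of one such adjustment, assuming a demographic-parity-style criterion in which each subgroup enrolls the same fraction of its members (the function name, data, and criterion here are illustrative, not the paper's actual method):

```python
# Illustrative sketch: choose a per-group risk-score threshold so that
# roughly the same fraction of each subgroup is enrolled in care
# management, regardless of how the score distributions differ by group.

def group_thresholds(scores, groups, enroll_rate):
    """Return a risk-score threshold for each group such that about
    `enroll_rate` of that group's members are selected."""
    by_group = {}
    for score, group in zip(scores, groups):
        by_group.setdefault(group, []).append(score)
    thresholds = {}
    for group, vals in by_group.items():
        vals.sort(reverse=True)
        k = max(1, round(enroll_rate * len(vals)))
        thresholds[group] = vals[k - 1]  # score of the k-th highest member
    return thresholds

# Hypothetical risk scores for two subgroups with shifted distributions.
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

t = group_thresholds(scores, groups, enroll_rate=0.4)
enrolled = [g for s, g in zip(scores, groups) if s >= t[g]]
# Each group enrolls its own top 40%, so a group with lower scores
# overall is not excluded by a single global cutoff.
```

With a single global threshold of 0.6, group "b" would enroll no one; the per-group thresholds equalize enrollment rates instead. Other fairness criteria (for example, equalizing enrollment among truly high-need patients) would lead to different threshold choices.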
Publisher
Cold Spring Harbor Laboratory