A bias evaluation checklist for predictive models and its pilot application for 30-day hospital readmission models

Authors:

H. Echo Wang1, Matthew Landers2, Roy Adams3, Adarsh Subbaswamy4, Hadi Kharrazi1, Darrell J. Gaskin1, Suchi Saria4

Affiliation:

1. Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA

2. Department of Computer Science, University of Virginia, Charlottesville, Virginia, USA

3. Department of Psychiatry and Behavioral Sciences, Johns Hopkins School of Medicine, Baltimore, Maryland, USA

4. Department of Computer Science and Statistics, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA

Abstract

Objective

Health care providers increasingly rely on predictive algorithms when making important treatment decisions; however, evidence indicates that these tools can lead to inequitable outcomes across racial and socio-economic groups. In this study, we introduce a bias evaluation checklist that gives model developers and health care providers a means to systematically appraise a model's potential to introduce bias.

Materials and Methods

We developed a bias evaluation checklist, conducted a scoping literature review to identify 30-day hospital readmission prediction models, and assessed the selected models using the checklist.

Results

We selected 4 models for evaluation: LACE, HOSPITAL, Johns Hopkins ACG, and HATRIX. Our assessment identified critical ways in which these algorithms can perpetuate health care inequalities. We found that LACE and HOSPITAL have the greatest potential for introducing bias, Johns Hopkins ACG has the most areas of uncertainty, and HATRIX has the fewest causes for concern.

Discussion

Our approach gives model developers and health care providers a practical and systematic method for evaluating bias in predictive models. Traditional bias identification methods do not elucidate sources of bias and are thus insufficient for mitigation efforts. With our checklist, bias can be addressed and eliminated before a model is fully developed or deployed.

Conclusion

The potential for algorithms to perpetuate biased outcomes is not isolated to readmission prediction models; rather, we believe our results have implications for predictive models across health care. We offer a systematic method for evaluating potential bias with sufficient flexibility to be utilized across models and applications.

Publisher

Oxford University Press (OUP)

Subject

Health Informatics

