An objective framework for evaluating unrecognized bias in medical AI models predicting COVID-19 outcomes

Authors:

Hossein Estiri 1,2, Zachary H Strasser 1,2, Sina Rashidian 3,4, Jeffrey G Klann 1,2,5, Kavishwar B Wagholikar 1,2, Thomas H McCoy 6, Shawn N Murphy 1,5,7,8

Affiliations:

1. Laboratory of Computer Science, Massachusetts General Hospital, Boston, Massachusetts, USA

2. Department of Medicine, Massachusetts General Hospital, Boston, Massachusetts, USA

3. Verily Life Sciences, Boston, Massachusetts, USA

4. Massachusetts General Hospital, Boston, Massachusetts, USA

5. Research Information Science and Computing, Mass General Brigham, Somerville, Massachusetts, USA

6. Center for Quantitative Health, Massachusetts General Hospital, Boston, Massachusetts, USA

7. Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, USA

8. Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA

Abstract

Objective: The increasing translation of artificial intelligence (AI)/machine learning (ML) models into clinical practice brings an increased risk of direct harm from modeling bias; however, bias remains incompletely measured in many medical AI applications. This article aims to provide a framework for objective evaluation of medical AI from multiple aspects, focusing on binary classification models.

Materials and Methods: Using data from over 56 000 Mass General Brigham (MGB) patients with confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), we evaluate unrecognized bias in 4 AI models developed during the early months of the pandemic in Boston, Massachusetts, that predict risks of hospital admission, ICU admission, mechanical ventilation, and death after a SARS-CoV-2 infection, based purely on patients' pre-infection longitudinal medical records. Models were evaluated both retrospectively and prospectively using model-level metrics of discrimination, accuracy, and reliability, and a novel individual-level metric for error.

Results: We found inconsistent instances of model-level bias in the prediction models. From an individual-level perspective, however, almost all models performed with slightly higher error rates for older patients.

Discussion: While a model can be biased against certain protected groups (ie, perform worse) in certain tasks, it can at the same time be biased towards another protected group (ie, perform better). As such, current bias evaluation studies may lack a full depiction of the variable effects of a model on its subpopulations.

Conclusion: Only a holistic evaluation, a diligent search for unrecognized bias, can provide enough information for an unbiased judgment of AI bias that can invigorate follow-up investigations into the underlying roots of bias and ultimately drive change.
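The abstract describes evaluating binary classifiers with model-level metrics of discrimination, accuracy, and reliability, computed across protected subgroups. The sketch below illustrates that general style of subgroup-wise evaluation using standard scikit-learn metrics (AUROC for discrimination, Brier score for reliability); it is a minimal illustration, not the authors' framework. The subgroup_metrics helper, the metric choices, the synthetic age bands, and the simulated data are all assumptions for the example, and the paper's novel individual-level error metric is not reproduced here.

# A minimal sketch of subgroup-level evaluation for a binary classifier,
# in the spirit of the model-level metrics described above. Illustrative
# only: the helper, metric choices, and synthetic data are assumptions,
# and the paper's individual-level error metric is omitted.
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss, accuracy_score

def subgroup_metrics(y_true, y_prob, groups, threshold=0.5):
    """Per-subgroup discrimination (AUROC), reliability (Brier), and accuracy."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        y_t, y_p = y_true[mask], y_prob[mask]
        results[g] = {
            "n": int(mask.sum()),
            "auroc": roc_auc_score(y_t, y_p),      # discrimination
            "brier": brier_score_loss(y_t, y_p),   # reliability (lower is better)
            "accuracy": accuracy_score(y_t, (y_p >= threshold).astype(int)),
        }
    return results

# Synthetic example: hypothetical admission-risk predictions for two age bands.
rng = np.random.default_rng(0)
n = 2000
groups = rng.choice(["<65", ">=65"], size=n)
y_true = rng.binomial(1, np.where(groups == ">=65", 0.30, 0.15))
y_prob = np.clip(0.4 * y_true + rng.normal(0.3, 0.2, size=n), 0.0, 1.0)

for g, m in subgroup_metrics(y_true, y_prob, groups).items():
    print(g, m)

Comparing the per-subgroup values (with confidence intervals, in a fuller analysis) against the whole-cohort metrics is one way to surface the kind of inconsistent, group-dependent bias the Results describe.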

Publisher

Oxford University Press (OUP)

Subject

Health Informatics
