Bias and fairness assessment of a natural language processing opioid misuse classifier: detection and mitigation of electronic health record data disadvantages across racial subgroups

Authors:

Thompson Hale M1, Sharma Brihat1, Bhalla Sameer1, Boley Randy1, McCluskey Connor1, Dligach Dmitriy2, Churpek Matthew M3, Karnik Niranjan S1, Afshar Majid3

Affiliations:

1. Department of Psychiatry & Behavioral Sciences, Rush University Medical Center, Chicago, Illinois, USA

2. Department of Computer Science, Loyola University, Chicago, Illinois, USA

3. Department of Medicine, University of Wisconsin, Madison, Wisconsin, USA

Abstract

Objectives

To assess fairness and bias of a previously validated machine learning opioid misuse classifier.

Materials and Methods

Two experiments were conducted with the classifier's original (n = 1000) and external validation (n = 53 974) datasets from 2 health systems. Bias was assessed by testing for differences in type II error rates across racial/ethnic subgroups (Black, Hispanic/Latinx, White, Other) using bootstrapped 95% confidence intervals. A local surrogate model was estimated to interpret the classifier's predictions by race and averaged globally across the datasets. Subgroup analyses and post-hoc recalibrations were conducted to mitigate the biased metrics.

Results

We identified bias in the false negative rate (FNR) of the Black subgroup (FNR = 0.32) compared with that of the White subgroup (FNR = 0.17). Top features included "heroin" and "substance abuse" across subgroups. Post-hoc recalibration eliminated the bias in FNR with minimal changes to the other subgroup error metrics. The Black FNR subgroup had higher risk scores for readmission and mortality than the White FNR subgroup, and a higher mortality risk score than the Black true positive subgroup (P < .05).

Discussion

The Black FNR subgroup had the greatest severity of disease and risk of poor outcomes. Similar features predicted opioid misuse across subgroups, but inequities were present. Post-hoc techniques mitigated bias in the type II error rate without creating substantial type I error rates. Bias and data disadvantages should be systematically addressed from model design through deployment.

Conclusion

Standardized, transparent bias assessments are needed to improve trustworthiness in clinical machine learning models.
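The paper's analysis code is not reproduced here. The sketch below illustrates, under assumed inputs, the two quantitative steps the abstract describes: a bootstrapped 95% confidence interval for the FNR gap between racial subgroups, and a post-hoc, subgroup-specific decision-threshold recalibration that matches a subgroup's FNR to a reference value. All names (`y_true`, `y_prob`, `race`) and the synthetic data are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fnr(y_true, y_pred):
    """False negative rate: share of true positives the classifier misses."""
    pos = y_true == 1
    if pos.sum() == 0:
        return np.nan
    return float(np.mean(y_pred[pos] == 0))

def bootstrap_fnr_gap(y_true, y_pred, group, g1, g2, n_boot=2000):
    """Bootstrapped 95% CI for FNR(g1) - FNR(g2).
    An interval excluding 0 is evidence of a subgroup disparity."""
    gaps = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample rows with replacement
        yt, yp, g = y_true[idx], y_pred[idx], group[idx]
        gaps.append(fnr(yt[g == g1], yp[g == g1]) - fnr(yt[g == g2], yp[g == g2]))
    return np.nanpercentile(gaps, [2.5, 97.5])

def recalibrate_threshold(y_true, y_prob, target_fnr):
    """Post-hoc recalibration: choose the subgroup-specific decision threshold
    whose FNR is closest to a reference value (e.g., the White subgroup's FNR)."""
    thresholds = np.linspace(0.01, 0.99, 99)
    fnrs = np.array([fnr(y_true, (y_prob >= t).astype(int)) for t in thresholds])
    return float(thresholds[np.nanargmin(np.abs(fnrs - target_fnr))])

# Demo on synthetic data in which positives in one subgroup are systematically
# under-scored, producing an FNR disparity like the one reported in the paper.
n = 5000
race = rng.choice(["Black", "White"], size=n)
y_true = rng.integers(0, 2, size=n)
under_score = 0.15 * (race == "Black") * y_true   # depress scores for Black positives
y_prob = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, size=n) - under_score, 0, 1)
y_pred = (y_prob >= 0.5).astype(int)

lo, hi = bootstrap_fnr_gap(y_true, y_pred, race, "Black", "White")
print(f"Black-vs-White FNR gap, 95% CI: [{lo:.3f}, {hi:.3f}]")

white, black = race == "White", race == "Black"
target = fnr(y_true[white], y_pred[white])
t_black = recalibrate_threshold(y_true[black], y_prob[black], target)
print(f"Recalibrated decision threshold for the Black subgroup: {t_black:.2f}")
```

Lowering the decision threshold only for the disadvantaged subgroup trades some additional false positives for fewer missed cases, which is consistent with the abstract's finding that FNR bias was eliminated with minimal change to other subgroup error metrics.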

Funder

Agency for Healthcare Research & Quality

National Institute on Drug Abuse

National Institute on Alcohol Abuse and Alcoholism

National Center for Advancing Translational Sciences

National Institute of General Medical Sciences

National Library of Medicine

Conflict of Interest

Dr. Matthew M. Churpek declares a patent pending.

Publisher

Oxford University Press (OUP)

Subject

Health Informatics

Cited by 43 articles.