Abstract
As machine learning-based models continue to be developed for healthcare applications, greater effort is needed to ensure that these technologies do not reflect or exacerbate any unwanted or discriminatory biases present in the data. In this study, we introduce a reinforcement learning framework capable of mitigating biases that may have been acquired during data collection. In particular, we evaluated our model on the task of rapidly predicting COVID-19 for patients presenting to hospital emergency departments, aiming to mitigate any site-specific (hospital) and ethnicity-based biases present in the data. Using a specialized reward function and training procedure, we show that our method achieves clinically effective screening performance while significantly improving outcome fairness compared to current benchmarks and state-of-the-art machine learning methods. We performed external validation across three independent hospitals and additionally tested our method on a patient ICU discharge-status task, demonstrating model generalizability.
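The abstract describes, at a high level, a reward function that trades off screening accuracy against site- and ethnicity-based outcome disparities. The paper itself defines the exact formulation; the Python sketch below is only one illustrative reading of that idea, in which the names `base_reward`, `group_disparity`, and `shaped_reward`, the max-minus-min disparity measure, and the weight `lam` are all hypothetical assumptions rather than the authors' method.

```python
import numpy as np

# A minimal sketch (not the authors' formulation) of a fairness-shaped
# reward for a classification-as-RL screening agent. The per-group
# true-positive-rate bookkeeping, the max-min disparity measure, and
# the weight `lam` are illustrative assumptions.

def base_reward(action: int, label: int) -> float:
    """+1 for a correct screening call, -1 otherwise."""
    return 1.0 if action == label else -1.0

def group_disparity(tpr_by_group: dict) -> float:
    """Spread of per-group true-positive rates: 0 when every group
    (e.g. hospital site or ethnicity) is screened equally well."""
    rates = np.array(list(tpr_by_group.values()), dtype=float)
    return float(rates.max() - rates.min())

def shaped_reward(action: int, label: int, tpr_by_group: dict,
                  lam: float = 0.5) -> float:
    """Accuracy reward minus a weighted penalty on group disparity."""
    return base_reward(action, label) - lam * group_disparity(tpr_by_group)

# Example: running TPR estimates show site A is screened far better
# than site B, so even a correct prediction earns a reduced reward.
tprs = {"site_A": 0.92, "site_B": 0.71, "site_C": 0.88}
print(shaped_reward(action=1, label=1, tpr_by_group=tprs))  # 1.0 - 0.5*0.21 = 0.895
```

Penalizing the accuracy signal by the current cross-group disparity pushes the learned policy toward decisions that are simultaneously accurate and equitable across sites and ethnic groups, which is the general effect the abstract attributes to its specialized reward function.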
Publisher
Cold Spring Harbor Laboratory
Cited by
5 articles.