Abstract
Clinical risk prediction models integrated in digitized healthcare systems hold promise for personalized primary prevention and care. Fairness metrics are important tools for evaluating potential disparities across sensitive features in the field of prediction modeling. In this paper, we seek to assess the uptake of fairness metrics in clinical risk prediction modeling by conducting a scoping literature review of recent high-impact publications in the areas of cardiovascular disease and COVID-19. Our review shows that fairness metrics have rarely been used in clinical risk prediction modeling despite their ability to identify inequality and flag potential discrimination. We also find that the data used in clinical risk prediction models remain largely demographically homogeneous, demonstrating an urgent need for collecting and using data from diverse populations. To address these issues, we suggest specific strategies for increasing the use of fairness metrics while developing clinical risk prediction models.
Publisher
Cold Spring Harbor Laboratory