Abstract
The unintended biases introduced by optimization and machine learning (ML) models are a topic of great interest to medical professionals. Bias in healthcare decisions can cause patients from vulnerable populations (e.g., racially minoritized, low-income, or living in rural areas) to receive lower access to resources and experience inferior outcomes, thus exacerbating societal unfairness. In this systematic literature review, we present a structured overview of the literature on fair decision making in healthcare published up to April 2024. After screening 782 unique references, we identified 103 articles within the scope of our review. We categorize the identified articles into three sections: algorithmic bias, fairness metrics, and bias mitigation techniques. Specifically, we identify examples of algorithmic, data, and publication bias as they are typically encountered in research and practice. Subsequently, we define and discuss the fairness metrics previously considered in the literature, including fairness through unawareness, demographic parity, equal opportunity, and equal odds. Lastly, we summarize the bias mitigation techniques available in the optimization and ML literature, classifying them into pre-processing, in-processing, and post-processing approaches. Fairness in decision making is an emerging field, poised to substantially reduce social inequities and improve the overall well-being of underrepresented groups. Our review aims to increase awareness of fairness in healthcare decision making and to facilitate the selection of appropriate approaches under varying scenarios.
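For reference, the group-fairness notions named above admit standard formal definitions. The sketch below states them for a binary classifier with prediction $\hat{Y}$, true label $Y$, and protected attribute $A \in \{0,1\}$; these are the conventional formulations from the algorithmic-fairness literature and may differ in notation from the reviewed articles.

\begin{align*}
\text{Demographic parity:} \quad & P(\hat{Y} = 1 \mid A = 0) = P(\hat{Y} = 1 \mid A = 1) \\
\text{Equal opportunity:} \quad & P(\hat{Y} = 1 \mid Y = 1, A = 0) = P(\hat{Y} = 1 \mid Y = 1, A = 1) \\
\text{Equal odds:} \quad & P(\hat{Y} = 1 \mid Y = y, A = 0) = P(\hat{Y} = 1 \mid Y = y, A = 1), \quad y \in \{0, 1\}
\end{align*}

Fairness through unawareness, by contrast, requires only that the protected attribute $A$ not be used as an explicit input to the model.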