Affiliations:
1. Division of Hematology and Medical Oncology, Icahn School of Medicine at Mount Sinai, New York, New York
2. Tisch Cancer Institute, Icahn School of Medicine at Mount Sinai, New York, New York
3. Department of Anesthesiology, Perioperative and Pain Medicine, Icahn School of Medicine at Mount Sinai, New York, New York
4. Institute for Healthcare Delivery Science, Icahn School of Medicine at Mount Sinai, New York, New York
Abstract
Importance: Machine learning has the potential to transform cancer care by helping clinicians prioritize patients for serious illness conversations. However, models must be evaluated for unequal performance across racial groups (ie, racial bias) so that existing racial disparities are not exacerbated.

Objective: To evaluate whether racial bias exists in a predictive machine learning model that identifies 180-day cancer mortality risk among patients with solid malignant tumors.

Design, Setting, and Participants: In this cohort study, a machine learning model to predict cancer mortality for patients aged 21 years or older diagnosed with cancer between January 2016 and December 2021 was developed with a random forest algorithm using retrospective data from the Mount Sinai Health System cancer registry, the Social Security Death Index, and electronic health records up to the date when databases were accessed for cohort extraction (February 2022).

Exposure: Race category.

Main Outcomes and Measures: The primary outcomes were model discriminatory performance (area under the receiver operating characteristic curve [AUROC], F1 score) within each race category (Asian, Black, Native American, White, and other or unknown) and fairness metrics (equal opportunity, equalized odds, and disparate impact) for each pairwise comparison of race categories. True-positive rate ratios represented equal opportunity; true-positive and false-positive rate ratios together, equalized odds; and predicted-positive rate ratios, disparate impact. All metrics were estimated as a proportion or ratio, with variability captured through 95% CIs. The prespecified criterion for the model's clinical use was a threshold of at least 80% for fairness metrics across racial groups, to ensure that the model's predictions would not be biased against any specific race.

Results: The test validation dataset included 43 274 patients with balanced demographics. Mean (SD) age was 64.09 (14.26) years, with 49.6% older than 65 years. A total of 53.3% were female; 9.5%, Asian; 18.9%, Black; 0.1%, Native American; 52.2%, White; and 19.2%, other or unknown race; 0.1% had missing race data. A total of 88.9% of patients were alive, and 11.1% were dead. The AUROCs, F1 scores, and fairness metrics maintained reasonable concordance among the racial subgroups: AUROCs ranged from 0.75 (95% CI, 0.72-0.78) for Asian patients and 0.75 (95% CI, 0.73-0.77) for Black patients to 0.77 (95% CI, 0.75-0.79) for patients with other or unknown race; F1 scores, from 0.32 (95% CI, 0.32-0.33) for White patients to 0.40 (95% CI, 0.39-0.42) for Black patients; equal opportunity ratios, from 0.96 (95% CI, 0.95-0.98) for Black patients compared with White patients to 1.02 (95% CI, 1.00-1.04) for Black patients compared with patients with other or unknown race; equalized odds ratios, from 0.87 (95% CI, 0.85-0.92) for Black patients compared with White patients to 1.16 (95% CI, 1.10-1.21) for Black patients compared with patients with other or unknown race; and disparate impact ratios, from 0.86 (95% CI, 0.82-0.89) for Black patients compared with White patients to 1.17 (95% CI, 1.12-1.22) for Black patients compared with patients with other or unknown race.

Conclusions and Relevance: In this cohort study, the lack of significant variation in performance or fairness metrics indicated an absence of racial bias, suggesting that the model fairly identified cancer mortality risk across racial groups. It remains essential to consistently review the model's application in clinical settings to ensure equitable patient care.
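To make the fairness criteria above concrete, the following is a minimal sketch (not the study's code; all variable and function names are illustrative) of how the three pairwise ratios can be computed from binary 180-day mortality labels and binary model predictions using NumPy:

import numpy as np

def rates(y, y_hat):
    """Per-group rates from binary labels y and binary predictions y_hat."""
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    tpr = y_hat[y == 1].mean()  # true-positive rate, P(pred = 1 | label = 1)
    fpr = y_hat[y == 0].mean()  # false-positive rate, P(pred = 1 | label = 0)
    ppr = y_hat.mean()          # predicted-positive rate, P(pred = 1)
    return tpr, fpr, ppr

def fairness_ratios(y, y_hat, group, a, b):
    """Pairwise fairness ratios for group a relative to group b."""
    mask_a, mask_b = group == a, group == b
    tpr_a, fpr_a, ppr_a = rates(y[mask_a], y_hat[mask_a])
    tpr_b, fpr_b, ppr_b = rates(y[mask_b], y_hat[mask_b])
    return {
        "equal_opportunity": tpr_a / tpr_b,                # TPR ratio
        "equalized_odds": (tpr_a / tpr_b, fpr_a / fpr_b),  # TPR and FPR ratios
        "disparate_impact": ppr_a / ppr_b,                 # PPR ratio
    }

# Illustrative use with synthetic data; the study's prespecified criterion
# was a ratio of at least 0.80 for each fairness metric across group pairs.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
y_hat = rng.integers(0, 2, 1000)
group = rng.choice(["a", "b"], 1000)
print(fairness_ratios(y, y_hat, group, "a", "b"))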
Publisher
American Medical Association (AMA)