Codified Racism in Digital Health Platforms: A Meta-Analysis of COVID-19 Prediction Algorithms and Their Policy Implications

Author:

Hislop Maalana1

Affiliation:

1. Tufts University

Abstract

New technologies are changing the way we interact with the world around us, and we tend to use them on the assumption that they are neutral. This, however, is far from the truth. The blind spots of algorithmic models reflect the goals and ideologies of their developers and the society in which they live, and they risk replicating and amplifying human biases. This paper analyzes the harmful racial biases present in predictive digital health algorithms and how best to regulate them. To answer the research questions, a meta-analysis was carried out of prognostic COVID-19 models developed for clinical use within the US, using an analytic framework designed to reveal the risk of harmful racial bias. All five models observed presented a medium risk of bias. Policy recommendations for mitigating this bias include establishing national ethics standards, diversifying the AI workforce, investing in transparent data access systems, and improving biased measurement tools. While previous attempts to regulate this space have been made, to fully address racial bias in digital health, policymakers must acknowledge the historical systems of oppression that shape us and, by extension, our technologies, especially in a setting as high-risk as healthcare.

Publisher

Research Square Platform LLC

