Abstract
New technologies are changing the way we interact with the world around us, and we tend to use them on the assumption that they are neutral. This, however, is far from the truth. The blind spots of algorithmic models reflect the goals and ideologies of their developers and the society in which they live, and they run the risk of replicating and amplifying human biases. This paper analyzes the harmful racial biases present in predictive digital health algorithms and the best way to regulate them. To answer the research questions, a meta-analysis was carried out of prognostic COVID-19 models developed for clinical use within the US, using an analytic framework designed to reveal the risk of harmful racial bias. All five models observed presented a medium risk of bias. Possible policy recommendations for mitigating this bias include establishing national ethics standards, diversifying the AI workforce, investing in transparent data access systems, and improving biased measurement tools. While previous attempts to regulate this space have been made, to fully address racial bias in digital health, policymakers must acknowledge the historical systems of oppression that shape us and, by extension, our technologies, especially in such a high-risk setting as healthcare.
Publisher
Research Square Platform LLC