A scientometric analysis of fairness in health AI literature

Authors:

Alberto Isabelle Rose I., Alberto Nicole Rose I., Altinel Yuksel, Blacker Sarah, Binotti William Warr, Celi Leo Anthony, Chua Tiffany, Fiske Amelia, Griffin Molly, Karaca Gulce, Mokolo Nkiruka, Naawu David Kojo N, Patscheider Jonathan, Petushkov Anton, Quion Justin Michael, Senteio Charles, Taisbak Simon, Tırnova İsmail, Tokashiki Harumi, Velasquez Adrian, Yaghy Antonio, Yap Keagan

Abstract

Artificial intelligence (AI) and machine learning are central components of today’s medical environment. The fairness of AI, i.e., its freedom from bias, has repeatedly come into question. This study investigates the diversity of the members of academia whose scholarship poses questions about the fairness of AI. Articles combining the topics of fairness, artificial intelligence, and medicine were selected from PubMed, Google Scholar, and Embase using keyword searches. Eligibility screening and data extraction were performed manually and cross-checked by a second author for accuracy. Selected articles were cleaned and organized in Microsoft Excel; spatial diagrams were generated using Tableau Public, and additional graphs were generated using Matplotlib and Seaborn. Linear and logistic regressions were conducted in Python to measure the relationship between funding status, number of citations, and the gender demographics of the authorship team. We identified 375 eligible publications, including research and review articles concerning AI and fairness in healthcare. Analysis of the bibliographic data revealed an overrepresentation of authors who are white, male, and based in high-income countries, especially in the first- and last-author roles. Papers whose authors are based in higher-income countries also tended to be cited more often and to appear in higher-impact journals. These findings highlight the lack of diversity among the authors in the AI fairness community whose work gains the largest readership, potentially compromising the very impartiality that the AI fairness community is working towards.

Publisher

Public Library of Science (PLoS)

References: 27 articles.

