Demographic Representation in 3 Leading Artificial Intelligence Text-to-Image Generators

Authors:

Ali, Rohaid (1); Tang, Oliver Y. (1,2); Connolly, Ian D. (3); Abdulrazeq, Hael F. (1); Mirza, Fatima N. (4); Lim, Rachel K. (4); Johnston, Benjamin R. (5); Groff, Michael W. (5); Williamson, Theresa (3); Svokos, Konstantina (1); Libby, Tiffany J. (4); Shin, John H. (3); Gokaslan, Ziya L. (1); Doberstein, Curtis E. (1); Zou, James (6); Asaad, Wael F. (1,7,8,9)

Affiliation:

1. Department of Neurosurgery, The Warren Alpert Medical School of Brown University, Providence, Rhode Island

2. Department of Neurosurgery, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania

3. Department of Neurosurgery, Massachusetts General Hospital, Boston, Massachusetts

4. Department of Dermatology, The Warren Alpert Medical School of Brown University, Providence, Rhode Island

5. Department of Neurosurgery, Brigham and Women’s Hospital, Boston, Massachusetts

6. Department of Biomedical Data Science and, by courtesy, Computer Science and Electrical Engineering, Stanford University, Stanford, California

7. Department of Neuroscience, Norman Prince Neurosciences Institute, Rhode Island Hospital, Providence, Rhode Island

8. Department of Neuroscience, Brown University, Providence, Rhode Island

9. Department of Neuroscience, Carney Institute for Brain Science, Brown University, Providence, Rhode Island

Abstract

Importance: The progression of artificial intelligence (AI) text-to-image generators raises concerns of perpetuating societal biases, including profession-based stereotypes.

Objective: To gauge the demographic accuracy of surgeon representation by 3 prominent AI text-to-image models compared with real-world attending surgeons and trainees.

Design, Setting, and Participants: This cross-sectional study, conducted in May 2023, assessed the latest release of 3 leading publicly available AI text-to-image generators, chosen for their popularity at the time of the study. Seven independent reviewers categorized the AI-produced images. A total of 2400 images were analyzed, generated across 8 surgical specialties within each model; an additional 1200 images were evaluated using geography-based prompts for 3 countries. Real-world demographic characteristics were drawn from the Association of American Medical Colleges subspecialty report, which references the American Medical Association master file for physician demographic characteristics across 50 states. Because trainee demographics differ from those of attending surgeons, the 2 groups were analyzed separately. Race (non-White, defined as any race other than non-Hispanic White, vs White) and gender (female vs male) were assessed to evaluate known societal biases.

Exposures: Images were generated using the prompt template "a photo of the face of a [blank]," with the blank replaced by a surgical specialty. Geography-based prompting was evaluated by specifying the most populous country on each of 3 continents (the US, Nigeria, and China).

Main Outcomes and Measures: Representation of female and non-White surgeons in each model was compared with real demographic data using χ2, Fisher exact, and proportion tests.

Results: Mean representation of female (35.8% vs 14.7%; P < .001) and non-White (37.4% vs 22.8%; P < .001) surgeons was significantly higher among trainees than among attending surgeons. DALL-E 2 reflected attending surgeons' true demographic data for female surgeons (15.9% vs 14.7%; P = .39) and non-White surgeons (22.6% vs 22.8%; P = .92) but underestimated trainees' representation for both female (15.9% vs 35.8%; P < .001) and non-White (22.6% vs 37.4%; P < .001) surgeons. In contrast, Midjourney and Stable Diffusion depicted significantly fewer female (0% and 1.8%, respectively; P < .001) and non-White (0.5% and 0.6%, respectively; P < .001) surgeons than either DALL-E 2 or the true demographic data. Geography-based prompting increased non-White surgeon representation but did not alter female representation for any model in prompts specifying Nigeria and China.

Conclusions and Relevance: In this study, 2 of the 3 leading publicly available text-to-image generators amplified societal biases, depicting over 98% of surgeons as White and male. While 1 model depicted demographic characteristics comparable to those of real attending surgeons, all 3 models underestimated trainee representation. These findings suggest the need for guardrails and robust feedback systems to minimize the risk that AI text-to-image generators magnify stereotypes in professions such as surgery.
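The proportion tests described above can be illustrated with a minimal sketch. This is not the authors' code; the counts are hypothetical, back-derived from the reported percentages, assuming an even split of 800 images per model and using SciPy's exact binomial test (`scipy.stats.binomtest`) as a stand-in for the proportion tests named in the abstract.

```python
# Sketch: testing a model's share of female-depicted surgeons against
# real-world benchmarks, using illustrative counts (15.9% of 800
# DALL-E 2 images) and the reported real-world proportions
# (14.7% of attendings, 35.8% of trainees are female).
from scipy.stats import binomtest

n_images = 800                      # hypothetical images generated per model
n_female = round(0.159 * n_images)  # images judged to depict a female surgeon

# Compare model output with the attending-surgeon benchmark (14.7%).
vs_attendings = binomtest(n_female, n_images, p=0.147)

# Compare the same output with the trainee benchmark (35.8%).
vs_trainees = binomtest(n_female, n_images, p=0.358)

print(f"vs attendings: p = {vs_attendings.pvalue:.3f}")  # not significant
print(f"vs trainees:   p = {vs_trainees.pvalue:.2e}")    # highly significant
```

Under these assumed counts, the pattern matches the abstract: the model's output is statistically indistinguishable from attending demographics but significantly underrepresents trainees.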

Publisher

American Medical Association (AMA)

Subject

Surgery

