Epistemically violent biases in artificial intelligence design: the case of DALL-E 2 and Starry AI

Author:

Mbalaka Blessing

Abstract

Purpose: The paper aims to build on the well-documented work of Joy Buolamwini and Ruha Benjamin by extending their critique to the African continent. The research assesses whether algorithmic biases are prevalent in DALL-E 2 and Starry AI, with the aim of informing better artificial intelligence (AI) systems for future use.

Design/methodology/approach: The paper utilised a desktop study for the literature and gathered data from OpenAI's DALL-E 2 text-to-image generator and the StarryAI text-to-image generator.

Findings: DALL-E 2 significantly underperformed when tasked with generating images of "an African family" as opposed to images of a "family"; the former pictures lacked any conceivable detail compared with the latter. StarryAI significantly outperformed DALL-E 2 and rendered visible faces, but the accuracy of the culture portrayed was poor.

Research limitations/implications: Because of the chosen research approach, the results may lack generalisability; researchers are therefore encouraged to test the proposed propositions further. The implication, however, is that greater inclusion is warranted to address the cultural inaccuracies noted in several of the paper's experiments.

Practical implications: The paper is useful to advocates of algorithmic equality and fairness, as it highlights evidence of the implications of systemically induced algorithmic bias.

Social implications: Reducing offensive racial outputs and making AI more socially appropriate would yield a better product for commercialisation and general use. AI trained on diverse data can lead to better applications in contemporary society.

Originality/value: The paper's use of DALL-E 2 and Starry AI addresses an under-researched area, and future studies on this matter are welcome.
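The methodology described above rests on submitting matched prompts ("a family" versus "an African family") to the two text-to-image generators and comparing the outputs. The authors presumably worked through the generators' public interfaces; the sketch below shows, purely as an assumption, how the DALL-E 2 side of the comparison could be reproduced programmatically with OpenAI's Images API. The prompt wording, image count and image size are illustrative choices rather than the paper's actual settings, and no equivalent sketch is given for StarryAI because the source does not describe its interface.

```python
# Minimal sketch (not the authors' procedure): querying DALL-E 2 with matched
# prompts via OpenAI's Images API. Assumes OPENAI_API_KEY is set in the
# environment; prompts, image count and size are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = ["a family", "an African family"]

for prompt in prompts:
    response = client.images.generate(
        model="dall-e-2",   # the model examined in the paper
        prompt=prompt,
        n=4,                # small sample of images per prompt condition
        size="1024x1024",
    )
    # Print the hosted image URLs so the two prompt conditions
    # can be inspected and compared side by side.
    for i, image in enumerate(response.data):
        print(f"{prompt!r} image {i}: {image.url}")
```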

Publisher

Emerald
