Abstract
While artificial intelligence (AI) is often presented as a neutral tool, growing evidence suggests that it exacerbates gender, racial, and other biases, leading to discrimination and marginalization. This study analyzes the emerging agenda on intersectionality in AI. It examines four high-profile reports dedicated to this topic to interrogate how they frame problems and outline recommendations to address inequalities. These four reports play an important role in placing intersectionality issues on the political agenda of AI, which is typically dominated by questions about AI’s potential social and economic benefits. The documents highlight the systemic nature of the problems, which operate as a self-reinforcing vicious cycle: the diversity crisis in the AI workforce leads to the development of biased AI tools, as a largely homogeneous group of white male developers and tech founders builds its own biases into AI systems. Typical examples include gender and racial biases embedded in voice assistants, humanoid robots, and hiring tools. The reports frame the diversity situation in AI as alarming, note that previous diversity initiatives have not worked, emphasize urgency, and call for a holistic approach that focuses not just on numbers but on culture, power, and opportunities to exert influence. While dedicated reports on intersectionality in AI provide considerable depth, detail, and nuance on the topic, within a patriarchal system they risk being pigeonholed as issues relevant mainly to women and minorities rather than as part of the core agenda.