Propagation of societal gender inequality by internet search algorithms

Authors:

Madalina Vlasceanu¹ and David M. Amodio¹,²

Affiliation:

1. Department of Psychology, New York University, New York, NY 10003

2. Department of Psychology, University of Amsterdam, 1001 NK Amsterdam, The Netherlands

Abstract

Humans increasingly rely on artificial intelligence (AI) for efficient and objective decision-making, yet there is growing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are embedded. As a consequence, their use by human decision makers may result in the propagation, rather than reduction, of existing disparities. To assess this hypothesis empirically, we tested the relation between societal gender inequality and algorithmic search output and then examined the effect of this output on human decision-making. First, in two multinational samples (n = 37 and 52 countries), we found that greater nation-level gender inequality was associated with more male-dominated Google image search results for the gender-neutral keyword “person” (in a nation’s dominant language), revealing a link between societal-level disparities and algorithmic output. Next, in a series of experiments with human participants (n = 395), we demonstrated that the gender disparity associated with high- vs. low-inequality algorithmic outputs guided the formation of gender-biased prototypes and influenced hiring decisions in novel scenarios. These findings support the hypothesis that societal-level gender inequality is recapitulated in internet search algorithms, which in turn can influence human decision makers to act in ways that reinforce these disparities.
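The nation-level analysis described in the abstract can be illustrated with a minimal sketch: pair each country's gender-inequality index with the share of male-presenting images returned by a search for "person" in that country's dominant language, then test the association. The example data, variable names, and choice of a Spearman rank correlation below are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch (not the authors' code): correlate nation-level gender
# inequality with the share of male-presenting images returned for "person".
from scipy.stats import spearmanr

# Hypothetical per-country data: (inequality_index, proportion of male-presenting
# faces among top image-search results). Higher index = greater inequality.
countries = {
    "A": (0.08, 0.48),
    "B": (0.21, 0.55),
    "C": (0.35, 0.62),
    "D": (0.49, 0.71),
    "E": (0.56, 0.74),
}

inequality = [v[0] for v in countries.values()]
prop_male = [v[1] for v in countries.values()]

# Rank-order association between societal inequality and algorithmic output.
rho, p_value = spearmanr(inequality, prop_male)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

With real data, each country's proportion would come from coding the perceived gender of faces in its localized search results, and the inequality index from a standard cross-national measure.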

Funder

NYU Alliance For Public Interest Technology

The Netherlands Organisation for Scientific Research

Publisher

Proceedings of the National Academy of Sciences

Subject

Multidisciplinary


Cited by 33 articles.
