Algorithmic Political Bias in Artificial Intelligence Systems
Published: 2022-03-30
Issue: 2
Volume: 35
ISSN: 2210-5433
Container-title: Philosophy & Technology
Language: en
Short-container-title: Philos. Technol.
Abstract
Some artificial intelligence (AI) systems can display algorithmic bias, i.e., they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in some of the same ways in which algorithmic gender and racial biases emerge. However, it differs importantly from them because there are (in a democratic society) strong social norms against gender and racial biases. This does not hold to the same extent for political biases. Political biases can thus more powerfully influence people, which increases the chances that these biases become embedded in algorithms and makes algorithmic political biases harder to detect and eradicate than gender and racial biases even though they all can produce similar harm. Since some algorithms can now also easily identify people’s political orientations against their will, these problems are exacerbated. Algorithmic political bias thus raises substantial and distinctive risks that the AI community should be aware of and examine.
Funder
Rheinische Friedrich-Wilhelms-Universität Bonn
Publisher
Springer Science and Business Media LLC
Subject
History and Philosophy of Science, Philosophy
References: 103 articles.
Cited by
32 articles.