Abstract
A plethora of research has shed light on AI’s perpetuation of biases, with the primary focus on technological fixes or biased data. However, there is a deafening silence regarding the key role of programmers in mitigating bias in AI. A significant gap exists in our understanding of how a programmer’s personal characteristics may influence their professional design choices. This study addresses that gap by exploring the link between programmers’ sense of social responsibility and their moral imagination in AI, i.e., their intentions to correct bias in AI, particularly bias against marginalized populations. Furthermore, it is unexplored how a programmer’s preference for hierarchy between groups, social dominance orientation-egalitarianism (SDO-E), influences this relationship. We conducted a between-subjects online experiment with 263 programmers based in the United States. They were randomly assigned to conditions mimicking the narratives about agency reflected in technological determinism (low responsibility) and technological instrumentalism (high responsibility). The findings reveal that high social responsibility significantly boosts programmers’ moral imagination concerning their intentions to correct bias in AI, and that it is especially effective for high-SDO-E programmers. In contrast, low-SDO-E programmers exhibit consistently high levels of moral imagination in AI regardless of condition: they are highly empathetic, which enables the perspective-taking needed for moral imagination, and they are naturally motivated to equalize groups. This study underscores the need to cultivate social responsibility among programmers to enhance fairness and ethics in the development of artificial intelligence. The findings have important theoretical and practical implications for AI ethics and algorithmic fairness.
Publisher
Springer Science and Business Media LLC