Abstract
Automated decision-making is becoming increasingly common in the public sector. As a result, political institutions recommend the presence of humans in these decision-making processes as a safeguard against potentially erroneous or biased algorithmic decisions. However, the scientific literature on human-in-the-loop performance is not conclusive about the benefits and risks of such human presence, nor does it clarify which aspects of this human–computer interaction may influence the final decision. In two experiments, we simulate an automated decision-making process in which participants judge multiple defendants in relation to various crimes, and we manipulate the time at which participants receive support from a supposed automated system with Artificial Intelligence (before or after they make their judgments). Our results show that human judgment is affected when participants receive incorrect algorithmic support, particularly when they receive it before providing their own judgment, resulting in reduced accuracy. The data and materials for these experiments are freely available at the Open Science Framework: https://osf.io/b6p4z/. Experiment 2 was preregistered.
Funder
Ministerio de Ciencia e Innovación
Eusko Jaurlaritza
Publisher
Springer Science and Business Media LLC
Cited by: 5 articles.