Abstract
Following the studies of Araujo et al. (AI Soc 35:611–623, 2020) and Lee (Big Data Soc 5:1–16, 2018), this empirical study uses two scenario-based online experiments. The sample consists of 221 subjects from Germany, differing in both age and gender. The original studies are not replicated one-to-one. Instead, new scenarios are constructed as realistically as possible and focus on everyday work situations. They are based on the AI acceptance model of Scheuer (Grundlagen intelligenter KI-Assistenten und deren vertrauensvolle Nutzung. Springer, Wiesbaden, 2020) and, compared with the original studies, are extended by individual descriptive elements of AI systems. The first online experiment examines decisions made by artificial intelligence with varying degrees of impact: in the high-impact scenario, applicants are automatically selected for a job and immediately receive an employment contract; in the low-impact scenario, three applicants are automatically invited to a further interview. In addition, the relationship between age and risk perception is investigated. The second online experiment compares subjects’ perceived trust in decisions made by artificial intelligence either semi-automatically, with the assistance of human experts, or fully automatically. Two task types are distinguished: one that requires “human skills”, represented by a performance evaluation situation, and one that requires “mechanical skills”, represented by a work distribution situation. In addition, the extent of negative emotions in automated decisions is investigated. The results are related to the findings of Araujo et al. (2020) and Lee (2018). Implications for further research activities and practical relevance are discussed.
Funder
Hochschule Fresenius online plus GmbH
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Human-Computer Interaction, Philosophy
References (35 articles)
1. Araujo T, Helberger N, Kruikemeier S, de Vreese CH (2020) In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Soc 35(3):611–623. https://doi.org/10.1007/s00146-019-00931-w
2. Barocas S, Selbst AD (2016) Big data’s disparate impact. California Law Rev 104(3):671–732. https://doi.org/10.15779/Z38BG31
3. Bickmore T, Utami D, Matsuyama R, Paasche-Orlow MK (2016) Improving access to online health information with conversational agents: a randomized controlled experiment. J Med Internet Res. https://doi.org/10.2196/jmir.5239
4. Brandenburg S, Backhaus N (2015) Zur Entwicklung einer deutschen Version der modified Differential Emotions Scale (mDES). In: Wienrich C, Zander T, Gramann K (eds) 11 Berliner Werkstatt Mensch-Maschine-systeme: tagungsband. Universitätsverlag der TU Berlin, Berlin, pp 63–67
5. Brosch T, Pourtois G, Sander D (2010) The perception and categorization of emotional stimuli: a review. Psychology Press, pp 76–108. https://doi.org/10.1080/02699930902975754
Cited by 1 article.