A nascent design theory for explainable intelligent systems
Published: 2022-12
Issue: 4
Volume: 32
Pages: 2185-2205
ISSN: 1019-6781
Container-title: Electronic Markets
Language: en
Short-container-title: Electron Markets
Authors: Lukas-Valentin Herm (ORCID), Theresa Steinbach, Jonas Wanner (ORCID), Christian Janiesch (ORCID)
Abstract
Due to computational advances in the past decades, so-called intelligent systems can learn from increasingly complex data, analyze situations, and support users in their decision-making. In practice, however, the complexity of these intelligent systems makes it difficult for users to comprehend the decision logic of the underlying machine learning model. As a result, the adoption of this technology is hampered, especially in high-stakes scenarios. In this context, explainable artificial intelligence offers numerous starting points for making the inherent logic explainable to people. While research demonstrates the necessity of incorporating explainable artificial intelligence into intelligent systems, there is still a lack of knowledge about how to socio-technically design these systems to address acceptance barriers among different user groups. In response, we have derived and evaluated a nascent design theory for explainable intelligent systems based on a structured literature review, two qualitative expert studies, a real-world use case application, and quantitative research. Our design theory comprises design requirements, design principles, and design features covering global explainability, local explainability, personalized interface design, and psychological/emotional factors.
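The design theory distinguishes global explainability (how the model behaves overall) from local explainability (why a single prediction was made). The following Python sketch is an illustration only, not part of the paper: it contrasts the two notions on an arbitrary black-box classifier, using scikit-learn's permutation importance for a global view and a simple LIME-style local surrogate for one instance. The dataset, model, and all parameter choices are assumptions made for demonstration.

# Illustrative sketch (not from the paper): global vs. local explainability
# for a black-box classifier. Dataset, model, and parameters are arbitrary.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global explainability: which features influence the model's behavior overall?
global_imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
top_features = np.argsort(global_imp.importances_mean)[::-1][:5]
print("Globally most influential feature indices:", top_features)

# Local explainability (LIME-style surrogate): explain a single prediction by
# fitting an interpretable linear model on perturbations around that instance.
x0 = X[0]
perturbed = x0 + np.random.normal(scale=X.std(axis=0) * 0.1, size=(500, X.shape[1]))
target = model.predict_proba(perturbed)[:, 1]
surrogate = Ridge(alpha=1.0).fit(perturbed, target)
print("Local attributions for instance 0 (first 5 features):", surrogate.coef_[:5])

The global view supports trust in the system as a whole, whereas the local surrogate addresses the comprehension of individual decisions; the paper's design requirements, principles, and features target both levels rather than prescribing a specific explanation technique.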
Funder
Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie; Julius-Maximilians-Universität Würzburg
Publisher
Springer Science and Business Media LLC
Subject
Management of Technology and Innovation, Marketing, Computer Science Applications, Economics and Econometrics, Business and International Management
Cited by: 6 articles