Affiliation:
1. Industrial and Systems Engineering Department, De La Salle University—Manila, 2401 Taft Ave, Malate, Manila 1004, Philippines
Abstract
Explainable Artificial Intelligence (XAI) has successfully addressed the black-box paradox of Artificial Intelligence (AI). By providing human-understandable insights into AI, it allows users to grasp a system's inner workings even with limited knowledge of the underlying machine learning algorithms. As a result, the field has grown and development has flourished. However, concerns have been raised that current techniques are limited in terms of whom they serve and how their effects can be leveraged. To date, most XAI techniques have been designed by and for developers. Though needed and valuable, XAI is even more critical for end-users, since transparency bears directly on trust and adoption. This study aims to understand and conceptualize an end-user-centric XAI to fill this gap in end-user understanding. Building on recent findings of related studies, it focuses on design conceptualization and affective analysis. Data from 202 participants were collected through an online survey to identify the vital XAI design components, and through testbed experimentation to explore the affective and trust changes under each design configuration. The results show that affect is a viable trust-calibration route for XAI. In terms of design, explanation form, communication style, and the presence of supplementary information are the components users look for in an effective XAI. Lastly, anxiety toward AI, incidental emotion, perceived AI reliability, and experience using the system are significant moderators of the trust-calibration process for end-users.
Funder
Department of Science and Technology
Subject
Computer Networks and Communications, Human-Computer Interaction, Communication
Cited by: 5 articles.