Abstract
The increasing prevalence of Artificial Intelligence (AI) in safety-critical contexts such as air-traffic control requires systems that are not only practical and efficient but also explainable to humans in order to be trusted and accepted. The present structured literature analysis examines n = 236 articles on the requirements for the explainability and acceptance of AI. Results include a comprehensive review of n = 48 articles on the information people need to perceive an AI as explainable, the information needed to accept an AI, and the representation and interaction methods that promote trust in an AI. Results indicate that the two main groups of users are developers, who require information about the internal operations of the model, and end users, who require information about AI results or behavior. Users’ information needs vary in specificity, complexity, and urgency and must account for context, domain knowledge, and the user’s cognitive resources. The acceptance of AI systems depends on information about the system’s functions and performance, privacy and ethical considerations, goal-supporting information tailored to individual preferences, and information that establishes trust in the system. Information about the system’s limitations and potential failures can increase acceptance and trust. Trusted interaction methods are human-like and include natural language, speech, text, and visual representations such as graphs, charts, and animations. Our results have significant implications for the development of future human-centric AI systems and are thus suitable as input for further application-specific investigations of user needs.
Publisher
Springer Nature Switzerland
References: 120 articles.
Cited by: 6 articles.