Abstract
Artificial Intelligence (AI) systems are increasingly pervasive: Internet of Things, in-car intelligent devices, robots, and virtual assistants. Their large-scale adoption makes it necessary to explain their behaviour, for example to users who are impacted by their decisions or to developers who need to ensure their functionality. This requires, on the one hand, obtaining an accurate representation of the chain of events that caused the system to behave in a certain way (e.g., to make a specific decision). On the other hand, this causal chain needs to be communicated to users according to their needs and expectations. In this phase of explanation delivery, allowing interaction between user and model has the potential to improve both model quality and user experience. The XAINES project investigates the explanation of AI systems through narratives targeted to the needs of a specific audience, focusing on two aspects that are crucial for successful explanation: generating and selecting appropriate explanation content, i.e., the information to be contained in the explanation, and delivering this information to the user in an appropriate way. In this article, we present the project's roadmap towards enabling the explanation of AI with narratives.
Funder
Bundesministerium für Bildung und Forschung
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI)
Publisher
Springer Science and Business Media LLC
Cited by
3 articles.
1. RIXA - Explaining Artificial Intelligence in Natural Language. 2023 IEEE International Conference on Data Mining Workshops (ICDMW), 2023-12-04.
2. Evaluation Metrics for XAI: A Review, Taxonomy, and Practical Applications. 2023 IEEE 27th International Conference on Intelligent Engineering Systems (INES), 2023-07-26.
3. Explainable AI. KI - Künstliche Intelligenz, 2022-12.