Affiliation:
1. University of Duisburg-Essen, Duisburg, Germany
Abstract
Explaining system-generated recommendations based on user reviews can foster users’ understanding and assessment of the recommended items and the recommender system (RS) as a whole. While up to now explanations have mostly been static, shown in a single presentation unit, some interactive explanatory approaches have emerged in explainable artificial intelligence (XAI), making it easier for users to examine system decisions and to explore arguments according to their information needs. However, little is known about how interactive interfaces should be conceptualized and designed to meet the explanatory aims of transparency, effectiveness, and trust in RS. Thus, we investigate the potential of interactive, conversational explanations in review-based RS and propose an explanation approach inspired by dialog models and formal argument structures. In particular, we investigate users’ perception of two different interface types for presenting explanations: a graphical user interface (GUI)-based dialog consisting of a sequence of explanatory steps, and a chatbot-like natural-language interface.
Since providing explanations by means of natural language conversation is a novel approach, there is a lack of understanding of how users would formulate their questions, as well as a corresponding lack of datasets. We thus propose an intent model for explanatory queries and describe the development of ConvEx-DS, a dataset containing intent annotations of 1,806 user questions in the domain of hotels, which can be used to train intent detection methods as part of the development of conversational agents for explainable RS. We validate the model by measuring the user-perceived helpfulness of answers given on the basis of the implemented intent detection. Finally, we report on a user study investigating users’ evaluation of the two types of interactive explanations proposed (GUI and chatbot) and testing the effect of varying degrees of interactivity that result in greater or lesser access to explanatory information. Using Structural Equation Modeling, we reveal details of the relationships between the perceived quality of an explanation and the explanatory objectives of transparency, trust, and effectiveness. Our results show that providing interactive options for scrutinizing explanatory arguments has a significant positive influence on users’ evaluations (compared to less interactive alternatives). Results also suggest that user characteristics such as decision-making style may have a significant influence on the evaluation of different types of interactive explanation interfaces.
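To illustrate how a dataset like ConvEx-DS could be used to train an intent detection component, the following minimal Python sketch fits a bag-of-words baseline classifier on intent-annotated questions. It is a sketch under assumptions: the file name (convex_ds.csv), the column names (question, intent), and the TF-IDF/logistic-regression baseline are illustrative choices, not the method used in the paper.

# Illustrative sketch: train a simple intent classifier on a
# ConvEx-DS-style export with hypothetical columns "question" and "intent".
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline

df = pd.read_csv("convex_ds.csv")  # hypothetical CSV export of the dataset

# Hold out 20% of the annotated questions for evaluation, stratified by intent.
X_train, X_test, y_train, y_test = train_test_split(
    df["question"], df["intent"],
    test_size=0.2, random_state=42, stratify=df["intent"])

# TF-IDF features plus logistic regression as a plain baseline;
# the paper's actual intent detection approach may differ.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

# Per-intent precision/recall/F1 on the held-out questions.
print(classification_report(y_test, clf.predict(X_test)))

In such a setup, the trained classifier maps each incoming user question to one of the annotated intents, and the conversational explanation interface selects its answer strategy based on the predicted intent.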
Funder
German Research Foundation
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Human-Computer Interaction
Cited by
6 articles.