Affiliation:
1. Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
Abstract
Providing interpretable explanations can notably enhance users’ confidence in and satisfaction with recommender systems. Counterfactual explanations have shown strong performance in explainable sequential recommendation. However, existing counterfactual explanation models for sequential recommendation overlook the temporal dependencies in a user’s past behavior sequence. Moreover, counterfactual histories should stay as close to the real history as possible so that they do not conflict with the user’s genuine behavioral preferences. This paper presents Counterfactual Explanations considering Temporal Dependencies (CETD), a counterfactual explanation model for sequential recommendation that builds on a variational autoencoder (VAE) and accounts for temporal dependencies. To improve explainability, CETD employs a recurrent neural network (RNN) when generating counterfactual histories, thereby capturing both the user’s long-term preferences and short-term behavior in the real behavioral history. Meanwhile, CETD fits the distribution of the reconstructed data (i.e., the counterfactual sequences generated by VAE perturbation) in a latent space and uses the learned variance to reduce the proximity (distance) between the counterfactual sequences and the original sequence. Extensive experiments on two real-world datasets demonstrate that the proposed CETD consistently outperforms current state-of-the-art methods.
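To make the abstract's description concrete, the following is a minimal sketch, not the authors' implementation, of the general idea: a GRU-based sequence VAE that encodes a user's behavior history into a latent Gaussian (mean and learned variance), decodes a perturbed counterfactual sequence, and trains with a reconstruction term that keeps the counterfactual close to the real history plus a KL term that fits the latent distribution. All names (SeqCounterfactualVAE, cetd_style_loss), the choice of PyTorch and GRU cells, and the hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SeqCounterfactualVAE(nn.Module):
    """Hypothetical RNN-based VAE over item-ID sequences."""

    def __init__(self, num_items: int, emb_dim: int = 64,
                 hidden_dim: int = 128, latent_dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(num_items, emb_dim, padding_idx=0)
        # RNN encoder captures temporal dependencies in the behavior sequence.
        self.encoder_rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # RNN decoder reconstructs (and perturbs) the sequence from the latent code.
        self.latent_to_hidden = nn.Linear(latent_dim, hidden_dim)
        self.decoder_rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_items)

    def encode(self, seq: torch.Tensor):
        h, _ = self.encoder_rnn(self.embed(seq))   # (B, T, H)
        last = h[:, -1, :]                         # summary of the whole history
        return self.to_mu(last), self.to_logvar(last)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)              # learned variance sets perturbation scale
        return mu + std * torch.randn_like(std)

    def decode(self, z: torch.Tensor, seq: torch.Tensor):
        h0 = torch.tanh(self.latent_to_hidden(z)).unsqueeze(0)  # (1, B, H)
        h, _ = self.decoder_rnn(self.embed(seq), h0)
        return self.out(h)                         # item logits at each position

    def forward(self, seq: torch.Tensor):
        mu, logvar = self.encode(seq)
        z = self.reparameterize(mu, logvar)
        return self.decode(z, seq), mu, logvar


def cetd_style_loss(logits, seq, mu, logvar, beta: float = 0.1):
    """Proximity-style reconstruction term (counterfactual stays close to the
    real history) plus a KL term that fits the latent distribution."""
    recon = F.cross_entropy(logits.transpose(1, 2), seq)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl


if __name__ == "__main__":
    # Usage sketch on random data: 4 histories of length 20 over 1000 items.
    model = SeqCounterfactualVAE(num_items=1000)
    seq = torch.randint(1, 1000, (4, 20))
    logits, mu, logvar = model(seq)
    loss = cetd_style_loss(logits, seq, mu, logvar)
    loss.backward()
    counterfactual = logits.argmax(-1)             # perturbed (counterfactual) history
```

The reconstruction term here stands in for the paper's proximity objective (minimizing the distance between counterfactual and original sequences); the actual CETD formulation may weight or define these terms differently.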
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science