Affiliation:
1. Tsinghua University, China
Abstract
Providing reasonable explanations for a specific suggestion given by a recommender system can help users trust the system more. Because logic rule-based inference is concise, transparent, and aligned with human cognition, it can be adopted to improve the interpretability of recommendation models. Previous work that interprets user preference with logic rules focuses only on the construction of rules while neglecting the use of feature embeddings, which limits the model's ability to capture implicit relationships between features. In this paper, we aim to improve both the effectiveness and explainability of recommendation models by simultaneously representing logic rules and feature embeddings. We propose a novel model-intrinsic explainable recommendation method named Feature-Enhanced Neural Collaborative Reasoning (FENCR). The model automatically extracts representative logic rules from massive possibilities in a data-driven way. In addition, we utilize feature-interaction-based neural modules to represent logic operators on embeddings. Experiments on two large public datasets show that our model outperforms state-of-the-art neural logical recommendation models. Further case analyses demonstrate that FENCR derives reasonable rules, indicating its high robustness and expandability.
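For intuition, below is a minimal sketch of the general idea the abstract describes: logic operators (AND, OR, NOT) realized as learned neural modules that act on feature embeddings and compose into rules. This is not the authors' implementation; the module names, dimensions, MLP designs, and the final scoring head are all illustrative assumptions.

```python
# Illustrative sketch only: neural modules standing in for logic operators
# over feature embeddings, composable into rule trees. Not FENCR itself.
import torch
import torch.nn as nn


class NeuralLogicOp(nn.Module):
    """A learned binary logic operator (e.g., AND or OR) on embeddings.

    Maps two d-dimensional embeddings to one d-dimensional embedding,
    so operator outputs can be fed back in as operands.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Concatenate the two operand embeddings and transform them.
        return self.net(torch.cat([a, b], dim=-1))


class NeuralNot(nn.Module):
    """A learned unary negation operator on embeddings."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, a: torch.Tensor) -> torch.Tensor:
        return self.net(a)


dim = 64
AND, OR, NOT = NeuralLogicOp(dim), NeuralLogicOp(dim), NeuralNot(dim)

# Embeddings for three hypothetical user/item features.
f1, f2, f3 = (torch.randn(1, dim) for _ in range(3))

# Compose a rule such as (f1 AND f2) OR (NOT f3) in embedding space;
# a scoring head then maps the rule embedding to a preference score.
rule = OR(AND(f1, f2), NOT(f3))
score = torch.sigmoid(nn.Linear(dim, 1)(rule))
print(score.shape)  # torch.Size([1, 1])
```

Because every operator returns an embedding of the same dimensionality, rules of arbitrary depth can be assembled and trained end to end; which rules are kept is, per the abstract, decided in a data-driven way.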
Publisher
Association for Computing Machinery (ACM)