Explainable Recommendation via Interpretable Feature Mapping and Evaluation of Explainability

Authors: Deng Pan, Xiangrui Li, Xin Li, Dongxiao Zhu

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We design and perform experiments to demonstrate two advantages of our AMCF approach: 1) comparable rating predictions; 2) good explanations of why a user likes/dislikes an item. To demonstrate the first advantage, we compare rating prediction performance with baseline rating-prediction-only approaches. The demonstration of the second advantage, however, is not a trivial task, since there is currently no gold standard for evaluating explanations of recommendations except real customer feedback [Chen et al., 2019b; Gao et al., 2019]. Hence it is necessary to develop new schemes to evaluate the quality of explainability for both general and specific user preferences.
Researcher Affiliation | Academia | Deng Pan, Xiangrui Li, Xin Li and Dongxiao Zhu, Department of Computer Science, Wayne State University, USA. {pan.deng, xiangruili, xinlee, dzhu}@wayne.edu
Pseudocode | No | The paper describes its methods in prose and through figures, but it does not include explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | Code is available from https://github.com/pd90506/AMCF.
Open Datasets | Yes | MovieLens datasets. This dataset [Harper and Konstan, 2016] offers very complete movie genre information, which provides a perfect foundation for genre (aspect) preference prediction, i.e., determining which genre a user likes most. (A loading sketch follows the table.)
Dataset Splits | No | The paper mentions data pre-processing and evaluation but does not specify exact percentages or counts for training, validation, and test splits, nor does it refer to standard predefined splits with citations.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments.
Software Dependencies | No | The paper mentions using SVD, NCF, or FM as base models but does not provide specific version numbers for any software dependencies or libraries used in its implementation. (An illustrative base-model sketch follows the table.)
Experiment Setup | Yes | The regularization tuning parameter λ is set to 0.05, which demonstrated better performance compared to other selections. It is worth noting that the tuning parameters of the base model of our AMCF approach are directly inherited from the corresponding non-interpretable model. In terms of robustness, we set the dimension of latent factors in the base models to 20, 80, and 120. (A configuration sketch follows the table.)
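
As a concrete illustration of the genre information cited in the Open Datasets row, the sketch below loads the ML-1M movies file and builds a multi-hot genre (aspect) matrix. The local path and the use of pandas are assumptions; the paper does not describe its pre-processing code.

```python
import pandas as pd

# Hypothetical local path; in the ML-1M release, movies.dat stores rows as
# MovieID::Title::Genres, with the genres pipe-separated.
movies = pd.read_csv(
    "ml-1m/movies.dat",
    sep="::",
    engine="python",           # needed for a multi-character separator
    encoding="latin-1",        # ML-1M titles are not valid UTF-8
    names=["movie_id", "title", "genres"],
)

# Multi-hot genre (aspect) matrix: one indicator column per genre.
genre_multihot = movies["genres"].str.get_dummies(sep="|")
print(genre_multihot.shape)    # (n_movies, n_genres); ML-1M has 18 genres
```

A matrix like this is exactly the kind of per-item aspect signal that genre (aspect) preference prediction takes as input.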
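The Software Dependencies row names SVD, NCF, and FM as base models without pinning down an implementation or framework. For orientation only, here is a minimal SVD-style matrix-factorization base model; PyTorch, the class name `MFBase`, and the bias terms are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class MFBase(nn.Module):
    """Minimal matrix-factorization (SVD-style) rating predictor.

    Illustrative stand-in for the SVD/NCF/FM base models the paper
    mentions; the actual AMCF implementation may differ.
    """

    def __init__(self, n_users: int, n_items: int, dim: int = 20):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.user_bias = nn.Embedding(n_users, 1)
        self.item_bias = nn.Embedding(n_items, 1)

    def forward(self, users: torch.Tensor, items: torch.Tensor) -> torch.Tensor:
        # Predicted rating = dot product of latent factors plus biases.
        dot = (self.user_emb(users) * self.item_emb(items)).sum(dim=-1)
        return dot + self.user_bias(users).squeeze(-1) + self.item_bias(items).squeeze(-1)
```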
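Finally, the numbers reported in the Experiment Setup row translate directly into a small configuration sketch: λ = 0.05 and a robustness sweep over latent dimensions 20, 80, and 120. The loss form, the dummy tensors, and everything not quoted in the row are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

LAMBDA = 0.05                  # regularization tuning parameter λ (from the paper)
LATENT_DIMS = (20, 80, 120)    # reported robustness settings for latent factors

def regularized_loss(pred, rating, aspect_term):
    # Rating loss plus a λ-weighted auxiliary term. The excerpt fixes only
    # λ = 0.05; the exact aspect-preference term AMCF regularizes is not
    # reproduced here.
    return F.mse_loss(pred, rating) + LAMBDA * aspect_term

# Smoke test with dummy tensors.
pred, rating = torch.randn(8), torch.randn(8)
aspect_term = torch.tensor(0.3)
for dim in LATENT_DIMS:
    # A base model (SVD/NCF/FM) with `dim` latent factors would be trained
    # here, its remaining tuning parameters inherited from the corresponding
    # non-interpretable model, as the row above states.
    print(dim, regularized_loss(pred, rating, aspect_term).item())
```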