Explainable Recommendation through Attentive Multi-View Learning

Authors: Jingyue Gao, Xiting Wang, Yasha Wang, Xing Xie (pp. 3622-3629)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that our model outperforms state-of-the-art methods in terms of both accuracy and explainability.
Researcher Affiliation | Collaboration | ¹Peking University, {gaojingyue1997, wangyasha}@pku.edu.cn ²Microsoft Research Asia, {xitwan, xingx}@microsoft.com
Pseudocode | No | The paper describes algorithms (e.g., dynamic programming) but does not present them in structured pseudocode blocks or labeled algorithm figures.
Open Source Code | No | Footnote 4 links to a CSV file of explanations, not the source code for the methodology described in the paper. No other explicit statement about open-sourcing the code is provided.
Open Datasets | Yes | We use three datasets from different domains for evaluation. Table 1 summarizes the statistics of the datasets. Toys and Games is the part of the Amazon dataset² that focuses on Toys and Games. ... Digital Music is also from the Amazon 5-core dataset. ... Yelp consists of restaurant reviews from Yelp Challenge 2018³. ... ²http://jmcauley.ucsd.edu/data/amazon ³https://www.yelp.com/dataset/challenge
Dataset Splits | Yes | We randomly split the dataset into training (70%), validation (15%) and test (15%) sets.
Hardware Specification | No | The paper does not specify the hardware (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using the "Adam optimizer (Kingma and Ba 2014)" but does not provide version numbers for any software dependencies, libraries, or programming languages used.
Experiment Setup | Yes | The number of latent factors k for algorithms is searched in [8, 16, 32, 64, 128]. After parameter tuning, we set k = 8 for NMF, PMF and HFT, and k = 16 for SVD++. We set k = 32 for EFM, CKE, DeepCoNN, NARRE and DEAML. We set d1, d2, d3, λv, and λa to 20, 10, 10, 10.0, and 3.0, respectively.
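The paper's 70%/15%/15% random split can be reproduced with a few lines of standard Python. This is a minimal sketch, not the authors' code: the record list, the seed, and the `split_dataset` helper are all assumptions for illustration.

```python
import random

def split_dataset(records, seed=42):
    """Shuffle records and split into 70% train, 15% validation, 15% test."""
    records = list(records)
    rng = random.Random(seed)  # fixed seed for a reproducible split
    rng.shuffle(records)
    n = len(records)
    n_train = int(0.70 * n)
    n_val = int(0.15 * n)
    train = records[:n_train]
    val = records[n_train:n_train + n_val]
    test = records[n_train + n_val:]
    return train, val, test

# Example: split 1000 dummy interaction records.
train, val, test = split_dataset(range(1000))
```

Because the boundaries are computed with `int(...)`, any rounding remainder falls into the test set, keeping the three parts disjoint and exhaustive.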
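The latent-factor tuning described above amounts to evaluating each candidate k in [8, 16, 32, 64, 128] on the validation set and keeping the best one. A hedged sketch follows; the `select_latent_dim` helper and the toy error curve are hypothetical stand-ins for the paper's actual validation metric.

```python
def select_latent_dim(validation_error, candidates=(8, 16, 32, 64, 128)):
    """Return the candidate number of latent factors with lowest validation error."""
    return min(candidates, key=validation_error)

# Toy validation-error curve for illustration only (lowest at k = 32).
errors = {8: 0.95, 16: 0.91, 32: 0.88, 64: 0.90, 128: 0.93}
best_k = select_latent_dim(lambda k: errors[k])
```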