Dynamic Explainable Recommendation Based on Neural Attentive Models

Authors: Xu Chen, Yongfeng Zhang, Zheng Qin

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments to demonstrate the superiority of our model for improving recommendation performance. And to evaluate the explainability of our model, we first present examples to provide intuitive analysis on the highlighted review information, and then crowd-sourcing based evaluations are conducted to quantitatively verify our model's superiority. In this section, we evaluate our model by comparing it with several state-of-the-art models. We begin by introducing the experimental setup, and then report and analyze the experimental results.
Researcher Affiliation | Academia | Xu Chen, TNList, School of Software, Tsinghua University (xu-ch14@mails.tsinghua.edu.cn); Yongfeng Zhang, Department of Computer Science, Rutgers University (yongfeng.zhang@rutgers.edu); Zheng Qin, School of Software, Tsinghua University (qinzh@mail.tsinghua.edu.cn)
Pseudocode | No | The paper describes computational rules and architectures in text and equations but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks.
Open Source Code | No | The paper mentions implementing a baseline (NARRE) based on "the authors' public code" with a GitHub link (https://github.com/THUIR/NARRE), but there is no statement or link indicating that the source code for the proposed DER model is publicly available.
Open Datasets | Yes | We use two publicly available datasets from different domains to evaluate our models, that is: Amazon: This dataset contains user rating and review information for different products on www.amazon.com. ... Yelp: This is a large-scale dataset including users' rating and review behaviors for different restaurants. ... Dataset links: http://jmcauley.ucsd.edu/data/amazon (Amazon) and https://www.kaggle.com/yelp-dataset/yelp-dataset/data (Yelp)
Dataset Splits | Yes | For each user behavior sequence, the last and second last interactions are used for testing and validation, while the other interactions are left for training.
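The leave-one-out protocol quoted above can be sketched as follows. This is an illustrative reimplementation, not the paper's (unreleased) code; the function name `leave_one_out_split` and the dict-based data layout are assumptions for the example.

```python
def leave_one_out_split(user_sequences):
    """Split each user's chronologically ordered interaction list:
    last item -> test, second-to-last -> validation, rest -> training.

    user_sequences: dict mapping user id -> list of interactions,
    sorted oldest first.
    """
    train, valid, test = {}, {}, {}
    for user, seq in user_sequences.items():
        if len(seq) < 3:
            # Too short to yield all three splits; keep for training only.
            train[user] = seq
            continue
        train[user] = seq[:-2]
        valid[user] = seq[-2]
        test[user] = seq[-1]
    return train, valid, test
```

For example, a user with interactions `["a", "b", "c", "d"]` contributes `["a", "b"]` to training, `"c"` to validation, and `"d"` to testing.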
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory specifications, or types of computing resources used for the experiments.
Software Dependencies | No | The paper mentions using the "Stanford Core NLP tool" and "Skipgram model" but does not provide specific version numbers for these or any other software dependencies, nor does it list programming language versions or specific library versions.
Experiment Setup | Yes | In our model, the batch size as well as the learning rate are determined in the range of {50, 100, 150} and {0.001, 0.01, 0.1, 1}, respectively. The user/item embedding size K is tuned in the range of {8, 16, 32, 64, 128}.
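The reported tuning ranges imply an exhaustive search over 3 × 4 × 5 = 60 configurations. A minimal sketch of such a grid search, assuming a hypothetical `train_and_evaluate` callback that runs one training job and returns a validation metric (higher is better); the paper does not specify its exact selection procedure:

```python
from itertools import product

# Hyperparameter ranges as reported in the paper.
BATCH_SIZES = [50, 100, 150]
LEARNING_RATES = [0.001, 0.01, 0.1, 1]
EMBEDDING_SIZES = [8, 16, 32, 64, 128]

def grid_search(train_and_evaluate):
    """Try every (batch size, learning rate, embedding size K) combination
    and return the best-scoring configuration with its score."""
    best_score, best_config = float("-inf"), None
    for bs, lr, k in product(BATCH_SIZES, LEARNING_RATES, EMBEDDING_SIZES):
        score = train_and_evaluate(batch_size=bs, learning_rate=lr,
                                   embedding_size=k)
        if score > best_score:
            best_score, best_config = score, (bs, lr, k)
    return best_config, best_score
```

In practice each `train_and_evaluate` call would train the model to convergence on the training split and score it on the validation split described above.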