Co-Attentive Multi-Task Learning for Explainable Recommendation

Authors: Zhongxia Chen, Xiting Wang, Xing Xie, Tong Wu, Guoqing Bu, Yining Wang, Enhong Chen

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on three public datasets demonstrate the effectiveness of our model. In the rest of the paper, we first introduce the problem definition and our model, then describe the experiments and conclude.
Researcher Affiliation | Collaboration | (1) School of Computer Science and Technology, University of Science and Technology of China, China; (2) Microsoft Research Asia, China; (3) CFETS Information Technology (Shanghai) Co., Ltd., China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Source code: https://github.com/3878anonymous/CAML
Open Datasets | Yes | Three publicly available datasets from different domains are used in the evaluation. Electronics is the part of the Amazon dataset (http://jmcauley.ucsd.edu/data/amazon/) that focuses on electronic products; the 5-core version is used, in which every user and item has no fewer than 5 reviews (see the filtering sketch after the table). Movies&TV is also from the Amazon 5-core dataset and focuses on movies and TV. Yelp is a larger and much sparser dataset containing restaurant reviews from the Yelp Challenge 2016 (https://www.yelp.com/dataset/challenge).
Dataset Splits | Yes | We randomly choose 80% of the samples as training data, 10% for validation, and 10% for testing on each dataset (see the split sketch after the table).
Hardware Specification | Yes | The models are trained on an NVIDIA Tesla P100 GPU.
Software Dependencies | No | The paper mentions TensorFlow but does not specify a version number or any other software dependencies with version numbers.
Experiment Setup | Yes | The initial learning rate is set to 10^-3. The number of pointers is tested over [1, 2, 3, 4, 5] for all datasets. For regularization, the dropout rate is set to 0.2 and the L2 regularization weight is fixed at 10^-6. The rating prediction and explanation generation tasks are given the same weight of 1.0, and λc is tuned over [0.01, 0.05, 0.1, 0.2, 0.5] (see the training-setup sketch after the table).
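
The 5-core rule quoted above (every user and item has at least 5 reviews) can be sketched as follows. This is a minimal illustration, not the authors' preprocessing code; the iterative re-filtering loop and the column names `user_id`/`item_id` are assumptions.

```python
import pandas as pd

def five_core(df: pd.DataFrame, min_reviews: int = 5) -> pd.DataFrame:
    """Iteratively drop users and items with fewer than `min_reviews` reviews.

    Removing sparse users can push items below the threshold (and vice
    versa), so we repeat until the dataframe stops shrinking -- a common
    way to reach a stable k-core subset. The authors' exact procedure is
    not specified in the paper.
    """
    while True:
        before = len(df)
        df = df[df.groupby("user_id")["item_id"].transform("size") >= min_reviews]
        df = df[df.groupby("item_id")["user_id"].transform("size") >= min_reviews]
        if len(df) == before:
            return df
```

A usage note: applied to the raw Amazon review files, this yields a subset comparable to the published 5-core releases, which the paper uses directly.

The 80/10/10 random split reported in the Dataset Splits row can be sketched like this; the random seed is an assumption, since the paper does not report one.

```python
import numpy as np

def split_indices(n_samples: int, seed: int = 42):
    """Random 80/10/10 train/validation/test split over sample indices."""
    rng = np.random.default_rng(seed)       # seed is our assumption
    perm = rng.permutation(n_samples)       # shuffle all sample indices
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    train = perm[:n_train]
    val = perm[n_train:n_train + n_val]
    test = perm[n_train + n_val:]           # remaining ~10%
    return train, val, test
```

Finally, a sketch of how the quoted hyperparameters could be wired together in TensorFlow. Only the numeric values come from the paper; the Adam optimizer, the variable names, and the way the three loss terms are combined are our reading of the setup, not the authors' code.

```python
import tensorflow as tf

# Values quoted from the paper; everything else here is illustrative.
LEARNING_RATE = 1e-3    # initial learning rate
DROPOUT_RATE = 0.2      # dropout for model regularization
L2_WEIGHT = 1e-6        # fixed L2 regularization weight
W_RATING = 1.0          # rating-prediction task weight
W_EXPLANATION = 1.0     # explanation-generation task weight
LAMBDA_C = 0.1          # tuned over [0.01, 0.05, 0.1, 0.2, 0.5]

# The paper does not name its optimizer; Adam is an assumption.
optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)

def total_loss(rating_loss, explanation_loss, concept_loss, model):
    """Weighted multi-task objective.

    The two main tasks share weight 1.0 and lambda_c scales the third
    term; this additive combination is our interpretation of the quoted
    setup, not the authors' exact formulation.
    """
    l2 = tf.add_n([tf.nn.l2_loss(v) for v in model.trainable_variables])
    return (W_RATING * rating_loss
            + W_EXPLANATION * explanation_loss
            + LAMBDA_C * concept_loss
            + L2_WEIGHT * l2)
```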