Learning Disentangled Representations for Recommendation
Authors: Jianxin Ma, Chang Zhou, Peng Cui, Hongxia Yang, Wenwu Zhu
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results show that our approach can achieve substantial improvement over the state-of-the-art baselines. We conduct our experiments on five real-world datasets. |
| Researcher Affiliation | Collaboration | Jianxin Ma¹,², Chang Zhou¹, Peng Cui², Hongxia Yang¹, Wenwu Zhu² (¹Alibaba Group, ²Tsinghua University) |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | The dataset and our code are at https://jianxinma.github.io/disentangle-recsys.html. |
| Open Datasets | Yes | We conduct our experiments on five real-world datasets. Specifically, we use the large-scale Netflix Prize dataset [4], and three MovieLens datasets of different scales (i.e., ML-100k, ML-1M, and ML-20M) [16]. We additionally collect a dataset, named AliShop-7C, from Alibaba's e-commerce platform Taobao. The dataset and our code are at https://jianxinma.github.io/disentangle-recsys.html. |
| Dataset Splits | No | The paper states 'We follow the experiment protocol established by the previous work [32] strictly, and use the same preprocessing procedure as well as evaluation metrics.' but does not explicitly provide the specific train/validation/test dataset splits (e.g., percentages or counts) within the paper itself. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, memory, or specific computing environments used for experiments. |
| Software Dependencies | No | The paper mentions 'Adam' as an optimizer and 'Hyperopt' for hyperparameter tuning, but no specific version numbers are provided for these or any other software dependencies. |
| Experiment Setup | Yes | We constrain the number of learnable parameters to be around 2Md for each method so as to ensure fair comparison... We set d = 100 unless otherwise specified. We fix τ to 0.1. We tune the other hyper-parameters of both our approach and our baselines automatically using the TPE method [6] implemented by Hyperopt [5]. |
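
For context, the TPE-based tuning cited in the Experiment Setup row is typically driven through Hyperopt's `fmin` interface. The sketch below is illustrative only: the search space, the number of evaluations, the `train_and_validate` helper, and the choice of validation metric are assumptions for demonstration, not details taken from the paper.

```python
from hyperopt import fmin, tpe, hp, Trials

# Hypothetical search space; the paper does not specify which
# hyper-parameters were tuned or their ranges.
space = {
    "learning_rate": hp.loguniform("learning_rate", -9, -3),   # ~1e-4 .. ~5e-2
    "dropout": hp.uniform("dropout", 0.0, 0.5),
    "weight_decay": hp.loguniform("weight_decay", -12, -5),
}

def objective(params):
    # train_and_validate is a hypothetical stand-in: it would train the model
    # (with d = 100 and temperature tau = 0.1 fixed, as stated in the paper)
    # and return a validation score such as NDCG.
    val_score = train_and_validate(**params)
    # Hyperopt minimizes the objective, so negate the validation score.
    return -val_score

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=50, trials=trials)
print("Best hyper-parameters found:", best)
```

Negating the validation score is the standard way to use Hyperopt's minimizer for a metric that should be maximized; the `Trials` object keeps a record of every evaluated configuration for later inspection.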