DELF: A Dual-Embedding based Deep Latent Factor Model for Recommendation

Authors: Weiyu Cheng, Yanyan Shen, Yanmin Zhu, Linpeng Huang

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conducted extensive experiments on real-world datasets. The results verify the effectiveness of user/item dual embeddings and the superior performance of DELF on item recommendation." "In this section, we conduct experiments with the aim of answering the following research questions: RQ1: How does our approach perform compared with the state-of-the-art recommendation methods? RQ2: Are the key components in DELF (i.e., attentive module, pairwise neural interaction layers) useful for improving recommendation results? RQ3: How does the performance of DELF vary with different values of the hyper-parameters?"
Researcher Affiliation | Academia | "Weiyu Cheng, Yanyan Shen, Yanmin Zhu, Linpeng Huang, Department of Computer Science and Engineering, Shanghai Jiao Tong University, {weiyu_cheng, shenyy, yzhu, lphuang}@sjtu.edu.cn"
Pseudocode | No | The paper describes its model and learning process using mathematical equations and textual descriptions, but it does not include a dedicated 'Pseudocode' or 'Algorithm' block.
Open Source Code | No | The paper does not provide any links to open-source code, nor does it explicitly state that the code for the methodology is publicly available.
Open Datasets | Yes | "We conducted experiments using two public datasets: Movielens 1M (https://grouplens.org/datasets/movielens/1m/) and Amazon Music (http://jmcauley.ucsd.edu/data/amazon/)."
Dataset Splits | Yes | "To evaluate the recommendation performance, we employed the widely used leave-one-out evaluation [Rendle et al., 2009; He et al., 2017; Bayer et al., 2017]. We held out the latest interaction of each user as the test set, and collected the second latest interactions as the validation set. The remaining data were used for training."
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud instances) used to run the experiments.
Software Dependencies | No | The paper mentions "We implemented our proposed methods based on TensorFlow" but does not specify version numbers for TensorFlow or other software dependencies, which hinders reproducibility.
Experiment Setup | Yes | "We sampled four negative instances per positive instance. We used the batch size of 256 and the learning rate of 0.001. The size of the last hidden layer is termed as predictive factors [He et al., 2017] and we evaluated the factors in {8, 16, 32, 64}. We employed three hidden layers for each feedforward network."
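The leave-one-out protocol quoted under "Dataset Splits" (latest interaction per user held out for test, second latest for validation, the rest for training) can be sketched as below. The `(user, item, timestamp)` triple format and the function name are illustrative assumptions, not the authors' code.

```python
from collections import defaultdict

def leave_one_out_split(interactions):
    """Split (user, item, timestamp) triples per the leave-one-out protocol:
    latest interaction -> test, second latest -> validation, rest -> train.
    Assumed data layout for illustration; not the authors' implementation."""
    by_user = defaultdict(list)
    for user, item, ts in interactions:
        by_user[user].append((ts, item))
    train, valid, test = [], [], []
    for user, events in by_user.items():
        events.sort()  # chronological order by timestamp
        items = [item for _, item in events]
        if len(items) >= 3:
            train += [(user, i) for i in items[:-2]]
            valid.append((user, items[-2]))
            test.append((user, items[-1]))
        else:
            # too few interactions to hold out two; keep all for training
            train += [(user, i) for i in items]
    return train, valid, test
```

Note that users with fewer than three interactions cannot supply both a validation and a test instance; the sketch keeps them in training, one of several reasonable conventions the paper does not pin down.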
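The sampling and optimization settings quoted under "Experiment Setup" (four negatives per positive, batch size 256, learning rate 0.001, predictive factors in {8, 16, 32, 64}) can be captured in a small sketch; all variable and function names here are assumptions for illustration.

```python
import random

# Hyper-parameter values quoted from the paper; the names are assumptions.
BATCH_SIZE = 256
LEARNING_RATE = 0.001
NUM_NEGATIVES = 4                      # negatives sampled per positive
PREDICTIVE_FACTORS = [8, 16, 32, 64]   # evaluated sizes of the last hidden layer

def sample_training_instances(positives, all_items, num_negatives=NUM_NEGATIVES,
                              rng=random):
    """For each observed (user, item) pair, emit one positive instance
    (label 1) and `num_negatives` unobserved items for that user (label 0)."""
    observed = set(positives)
    instances = []
    for user, item in positives:
        instances.append((user, item, 1))
        for _ in range(num_negatives):
            neg = rng.choice(all_items)
            while (user, neg) in observed:  # resample observed items
                neg = rng.choice(all_items)
            instances.append((user, neg, 0))
    return instances
```

The resulting labeled instances would then be shuffled into batches of `BATCH_SIZE` and fed to the optimizer with `LEARNING_RATE`; the paper does not specify whether negatives are resampled each epoch.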