Memory Augmented Graph Neural Networks for Sequential Recommendation

Authors: Chen Ma, Liheng Ma, Yingxue Zhang, Jianing Sun, Xue Liu, Mark Coates (pp. 5045-5052)

AAAI 2020

Reproducibility assessment (each entry lists the variable, the result, and the supporting LLM response):
Research Type (Experimental): We extensively evaluate our model on five real-world datasets, comparing with several state-of-the-art methods and using a variety of performance metrics. The experimental results demonstrate the effectiveness of our model for the task of Top-K sequential recommendation.
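The row above mentions "a variety of performance metrics" for Top-K sequential recommendation without quoting them. As a hedged illustration (the exact metric definitions used by the authors are not reproduced here), two metrics commonly reported for this task, Recall@K and NDCG@K, can be sketched as:

```python
import math

def recall_at_k(recommended, relevant, k):
    """Fraction of a user's held-out (relevant) items that appear in the top-k list."""
    if not relevant:
        return 0.0
    hits = len(set(recommended[:k]) & set(relevant))
    return hits / len(relevant)

def ndcg_at_k(recommended, relevant, k):
    """Normalized discounted cumulative gain over a ranked top-k list
    (binary relevance; position i is discounted by 1/log2(i+2))."""
    relevant_set = set(relevant)
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(recommended[:k]) if item in relevant_set)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0
```

For example, `recall_at_k([1, 2, 3, 4, 5], [2, 7], 5)` returns 0.5, since one of the two held-out items appears in the top-5 list.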
Researcher Affiliation (Collaboration): Chen Ma (McGill University), Liheng Ma (McGill University, Mila), Yingxue Zhang (Huawei Noah's Ark Lab in Montreal), Jianing Sun (Huawei Noah's Ark Lab in Montreal), Xue Liu (McGill University), Mark Coates (McGill University).
Pseudocode (No): The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code (No): The paper does not provide any statement or link indicating that the source code for the methodology is openly available.
Open Datasets (Yes): The proposed model is evaluated on five real-world datasets from various domains with different sparsities: MovieLens-20M (Harper and Konstan 2016), Amazon-Books and Amazon-CDs (He and McAuley 2016b), and Goodreads-Children and Goodreads-Comics (Wan and McAuley 2018).
Dataset Splits (Yes): For each user, we use the earliest 70% of the interactions in the user sequence as the training set and use the next 10% of interactions as the validation set for hyper-parameter tuning. The remaining 20% constitutes the test set for reporting model performance.
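The 70/10/20 chronological per-user split described above can be sketched as follows; the function name and the flooring behavior at the split boundaries are illustrative assumptions, not the authors' code:

```python
def split_user_sequence(interactions, train_frac=0.7, valid_frac=0.1):
    """Chronologically split one user's interaction sequence into
    train / validation / test portions (70% / 10% / 20% as in the paper).

    `interactions` is assumed to be ordered from earliest to latest.
    Boundary indices are floored; rounding at the cut points is an assumption.
    """
    n = len(interactions)
    n_train = int(n * train_frac)
    n_valid = int(n * valid_frac)
    train = interactions[:n_train]
    valid = interactions[n_train:n_train + n_valid]
    test = interactions[n_train + n_valid:]  # remaining ~20%
    return train, valid, test
```

Applied per user, this keeps the temporal order intact, so the model is always evaluated on interactions that occur after those it was trained on.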
Hardware Specification (No): The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies (No): The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiment.
Experiment Setup (Yes): In the experiments, the latent dimension of all the models is set to 50. For GRU4Rec and GRU4Rec+, we find that a learning rate of 0.001 and batch size of 50 can achieve good performance. ... For MA-GNN, we follow the same setting in Caser to set |L| = 5 and |T| = 3. ... The embedding size d is also set to 50. The values of h and m are selected from {5, 10, 15, 20}. The learning rate and λ are set to 0.001 and 0.001, respectively. The batch size is set to 4096.
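The MA-GNN hyperparameters quoted above can be collected into a single configuration sketch. The dictionary layout and key names below are illustrative assumptions (the paper does not publish code); the values are those reported in the experiment setup:

```python
# MA-GNN hyperparameters as reported in the paper's experiment setup.
# Key names are hypothetical; values follow the quoted text.
MAGNN_CONFIG = {
    "embedding_dim": 50,          # latent/embedding dimension d
    "seq_window_L": 5,            # |L| = 5, following Caser's setting
    "target_items_T": 3,          # |T| = 3, following Caser's setting
    "h_candidates": [5, 10, 15, 20],  # values searched for h
    "m_candidates": [5, 10, 15, 20],  # values searched for m
    "learning_rate": 1e-3,
    "lambda_reg": 1e-3,           # λ (regularization weight)
    "batch_size": 4096,
}
```

Note the contrast with the GRU4Rec baselines, which are reported to use a much smaller batch size of 50 at the same learning rate.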