STAR-GCN: Stacked and Reconstructed Graph Convolutional Networks for Recommender Systems

Authors: Jiani Zhang, Xingjian Shi, Shenglin Zhao, Irwin King

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results on multiple rating prediction benchmarks demonstrate our model achieves state-of-the-art performance in four out of five real-world datasets and significant improvements in predicting ratings in the cold start scenario.
Researcher Affiliation | Collaboration | 1 The Chinese University of Hong Kong, Hong Kong, China; 2 Hong Kong University of Science and Technology, Hong Kong, China; 3 Youtu Lab, Tencent, Shenzhen, China
Pseudocode | No | The paper describes the architecture and methods in text and diagrams, but does not include a formal pseudocode block or algorithm.
Open Source Code | Yes | The code implementation is available at https://github.com/jennyzhang0215/STAR-GCN.
Open Datasets | Yes | We conduct extensive experiments on five popular recommendation benchmarks for the transductive and inductive rating prediction tasks. The datasets are summarized in Table 1. Flixster and Douban are preprocessed and provided by Monti et al. [2017]. The MovieLens datasets [Harper and Konstan, 2016] contain different scales of rating pairs and are available at https://grouplens.org/datasets/movielens/.
Dataset Splits | Yes | We train the STAR-GCN models with the Adam [Kingma and Ba, 2015] optimizer and use the validation set to drive the learning rate decay scheduler.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are provided.
Software Dependencies | No | The paper mentions software components like the Adam optimizer and Leaky ReLU activation, but does not provide specific version numbers for any libraries or frameworks used (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | In all models, we choose the non-linear function α(·) as a Leaky ReLU activation with the negative slope equal to 0.1. For the input vectors, we set the dimension of node embeddings d_e to be 32 for small datasets and 64 for large datasets... The initial learning rate is set to be 0.002 and gradually decreases to 0.0005... The training batch size is fixed to be 10K for small datasets, 100K for ML-1M, and 500K for ML-10M. (A hedged sketch of this configuration follows below.)
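The quoted setup reduces to a handful of concrete optimizer and scheduler settings. The following is a minimal PyTorch sketch of that configuration (Leaky ReLU with negative slope 0.1, Adam starting at learning rate 0.002 and decayed toward the reported floor of 0.0005 based on validation performance). It is an illustration, not the authors' released implementation: the tiny model, the random data, and the decay factor/patience values are placeholder assumptions, since the paper does not specify the exact decay rule.

```python
import torch
import torch.nn as nn

# Hedged sketch of the reported training configuration; the model and data
# below are placeholders standing in for STAR-GCN and the rating benchmarks.
torch.manual_seed(0)

d_e = 32  # node-embedding dimension reported for the small datasets
model = nn.Sequential(
    nn.Linear(d_e, d_e),
    nn.LeakyReLU(negative_slope=0.1),  # reported non-linearity α(·)
    nn.Linear(d_e, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=0.002)  # reported initial LR

# Validation-driven decay toward the reported floor of 0.0005.
# factor=0.5 and patience=5 are assumptions, not values from the paper.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=5, min_lr=0.0005
)

x_val = torch.randn(256, d_e)   # placeholder validation features
y_val = torch.randn(256, 1)     # placeholder validation ratings

for epoch in range(20):
    x = torch.randn(1024, d_e)  # placeholder training batch
    y = torch.randn(1024, 1)
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()

    with torch.no_grad():
        val_loss = nn.functional.mse_loss(model(x_val), y_val)
    scheduler.step(val_loss)    # LR decay scheduled on the validation set
```

The validation-driven schedule mirrors the quoted sentence from the Dataset Splits row: the training split updates the weights, while a held-out validation split controls when the learning rate drops from 0.002 toward 0.0005.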