Graph Contextualized Self-Attention Network for Session-based Recommendation
Authors: Chengfeng Xu, Pengpeng Zhao, Yanchi Liu, Victor S. Sheng, Jiajie Xu, Fuzhen Zhuang, Junhua Fang, Xiaofang Zhou
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two real-world datasets show that GC-SAN outperforms state-of-the-art methods consistently. |
| Researcher Affiliation | Academia | 1. Institute of AI, School of Computer Science and Technology, Soochow University, China; 2. Zhejiang Lab, China; 3. Key Lab of IIP of CAS, Institute of Computing Technology, Beijing, China; 4. Rutgers University, New Jersey, USA; 5. The University of Central Arkansas, Conway, USA; 6. The University of Queensland, Brisbane, Australia |
| Pseudocode | No | The paper presents equations and architectural diagrams but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides GitHub links for baseline models (e.g., GRU4Rec, SR-GNN) but does not include any statement or link indicating that the source code for GC-SAN is publicly available. |
| Open Datasets | Yes | We study the effectiveness of our proposed approach GC-SAN on two real-world datasets, i.e., Diginetica (http://cikm2016.cs.iupui.edu/cikm-cup/) and Retailrocket (https://www.kaggle.com/retailrocket/ecommerce-dataset). |
| Dataset Splits | No | Furthermore, for session-based recommendation, we set the sessions data of last week as the test data, and the remaining for training. (This describes only a train/test split; the paper does not mention a validation split or how one would be derived if used internally.) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud resources) used for running the experiments. |
| Software Dependencies | No | The paper does not specify version numbers for the software dependencies or libraries used in the experiments. |
| Experiment Setup | No | The paper discusses the impact of hyperparameters such as the weight factor ω, the number of self-attention blocks k, and the embedding size d, and mentions Dropout regularization. However, it does not report the concrete values used in the main experiments, such as the learning rate, batch size, dropout rate, or optimizer settings. |