LGMRec: Local and Global Graph Learning for Multimodal Recommendation
Authors: Zhiqiang Guo, Jianjun Li, Guohui Li, Chaoyang Wang, Si Shi, Bin Ruan
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on three benchmark datasets demonstrate the superiority of our LGMRec over various state-of-the-art recommendation baselines, showcasing its effectiveness in modeling both local and global user interests. |
| Researcher Affiliation | Academia | 1 School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China 2 School of Software Engineering, Huazhong University of Science and Technology, Wuhan, China 3 Wuhan Digital Engineering Institute, Wuhan, China 4 Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen, China {zhiqiangguo, jianjunli, guohuili}@hust.edu.cn, sunwardtree@outlook.com, shisi@gml.ac.cn, binruan0227@gmail.com |
| Pseudocode | No | The paper describes the methodology using mathematical equations and textual descriptions but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | We implement LGMRec with MMRec (Zhou 2023); code: https://github.com/georgeguo-cn/LGMRec |
| Open Datasets | Yes | To evaluate our proposed model, we conduct comprehensive experiments on three widely used Amazon datasets (McAuley et al. 2015): Baby, Sports and Outdoors, Clothing Shoes and Jewelry. We refer to them as Baby, Sports, and Clothing for brevity. |
| Dataset Splits | Yes | For each dataset, we randomly split historical interactions into training, validation, and testing sets with an 8:1:1 ratio. (A minimal split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU/CPU models or memory. |
| Software Dependencies | No | The paper mentions implementing LGMRec with MMRec (Zhou 2023), but does not specify other software dependencies like Python version, PyTorch/TensorFlow versions, or other libraries with their version numbers. |
| Experiment Setup | Yes | For a fair comparison, we optimize all models with the default batch size 2048, learning rate 0.001, and embedding size d = 64. For all graph-based methods, the number L of collaborative graph propagation layers is set to 2. In addition, we initialize the model parameters with the Xavier method (Glorot and Bengio 2010). For our model, the optimal hyper-parameters are determined via grid search on the validation set. Specifically, the numbers of modal graph embedding layers and hypergraph embedding layers (K and H) are tuned in {1, 2, 3, 4}. The number A of hyperedges is searched in {1, 2, 4, 8, 16, 32, 64, 128, 256}. The dropout ratio ρ and the adjust factor α are tuned in {0.1, 0.2, ..., 1.0}. We search both the weight λ2 of the contrastive loss and the regularization coefficient λ1 in {1e-6, 1e-5, ..., 0.1}. An early-stopping mechanism is adopted, i.e., training stops when R@20 on the validation set does not increase for 20 successive epochs. (A hedged sketch of this tuning loop follows the table.) |
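The 8:1:1 random split reported above can be reproduced along these lines. This is a minimal sketch, assuming interactions are available as (user, item) pairs; the helper name `split_interactions` and the fixed seed are illustrative, and MMRec's built-in splitter, which the authors actually use, may split per user rather than over the global interaction list.

```python
import random

def split_interactions(interactions, ratios=(0.8, 0.1, 0.1), seed=2024):
    """Randomly split (user, item) interaction pairs 8:1:1 into
    train / validation / test sets, as described in the paper.
    Illustrative only: MMRec's own splitter may operate per user."""
    rng = random.Random(seed)  # fixed seed is an assumption, for repeatability
    pool = list(interactions)
    rng.shuffle(pool)
    n = len(pool)
    n_train = int(ratios[0] * n)
    n_valid = int(ratios[1] * n)
    return (pool[:n_train],
            pool[n_train:n_train + n_valid],
            pool[n_train + n_valid:])
```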
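The tuning protocol from the Experiment Setup row (grid search over the quoted ranges, with early stopping once validation R@20 fails to improve for 20 successive epochs) can be sketched as follows. The callbacks `train_one_epoch`, `eval_recall20`, and the factory `make_trainer` are hypothetical stand-ins for MMRec's training and evaluation routines, and the full Cartesian product enumerated here is almost certainly larger than what the authors searched; a per-parameter sweep is more likely in practice.

```python
import itertools

# Search ranges quoted from the paper. The loop below enumerates the full
# Cartesian product for illustration; the authors likely tuned parameters
# individually or in small groups.
GRID = {
    "K": [1, 2, 3, 4],                          # modal graph embedding layers
    "H": [1, 2, 3, 4],                          # hypergraph embedding layers
    "A": [1, 2, 4, 8, 16, 32, 64, 128, 256],    # number of hyperedges
    "dropout": [i / 10 for i in range(1, 11)],  # ρ in {0.1, ..., 1.0}
    "alpha": [i / 10 for i in range(1, 11)],    # α in {0.1, ..., 1.0}
    "lambda1": [1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 0.1],  # regularization coeff.
    "lambda2": [1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 0.1],  # contrastive loss weight
}

def train_with_early_stopping(train_one_epoch, eval_recall20,
                              max_epochs=1000, patience=20):
    """Train until validation R@20 stops improving for `patience` epochs."""
    best, stale = -1.0, 0
    for epoch in range(max_epochs):
        train_one_epoch(epoch)   # hypothetical: one epoch of optimization
        r20 = eval_recall20()    # hypothetical: validation Recall@20
        if r20 > best:
            best, stale = r20, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best

def grid_search(make_trainer):
    """`make_trainer(cfg)` is a hypothetical factory returning the two
    callbacks above for a model built with configuration `cfg`."""
    best_cfg, best_r20 = None, -1.0
    keys = list(GRID)
    for values in itertools.product(*(GRID[k] for k in keys)):
        cfg = dict(zip(keys, values))
        r20 = train_with_early_stopping(*make_trainer(cfg))
        if r20 > best_r20:
            best_cfg, best_r20 = cfg, r20
    return best_cfg, best_r20
```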