Federated Graph Learning for Cross-Domain Recommendation
Authors: Ziqi Yang, Zhaopeng Peng, Zihui Wang, Jianzhong Qi, Chaochao Chen, Weike Pan, Chenglu Wen, Cheng Wang, Xiaoliang Fan
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on 16 popular domains of the Amazon dataset, demonstrating that FedGCDR significantly outperforms state-of-the-art methods. |
| Researcher Affiliation | Academia | 1Fujian Key Laboratory of Sensing and Computing for Smart Cities, Xiamen University, China 2Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China 3School of Computing and Information Systems, The University of Melbourne, Australia 4College of Computer Science and Technology, Zhejiang University, Hangzhou, China 5College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China |
| Pseudocode | No | The paper describes the methodology and framework in detail but does not include any pseudocode blocks or algorithms labeled as 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | We open source the code at https://github.com/LafinHana/FedGCDR. |
| Open Datasets | Yes | We study the effectiveness of FedGCDR with 16 popular domains of a real-world dataset Amazon [52]. The Amazon dataset we used is the 2018 version and can be easily accessed in https://cseweb.ucsd.edu/~jmcauley/datasets/amazon_v2/. |
| Dataset Splits | No | To evaluate the recommendation performance, we use the leave-one-out method which is widely used in recommender systems [51]. Specifically, we held out the latest interaction as the test set and utilized the remaining data for training. (See the split sketch after this table.) |
| Hardware Specification | Yes | We conduct all the experiments on NVIDIA 3090 GPUs. |
| Software Dependencies | No | The paper mentions using Adam as the optimizer and setting batch sizes and learning rates, but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow versions). |
| Experiment Setup | Yes | We set batch size = 256 and latent dim = 8 for all domains. The number of propagation layers of the GAT-based federated model is set to 2. The MLP has two hidden layers with size = {16, 4}. Considering the trade-off between recommendation performance and privacy preservation, we set ϵ to 8 and σ to 10^-5. We set α = 0.01 and β = 0.01, which are the two coefficients of the objective function L_GAT(·). When training our models, we choose Adam as the optimizer, and set the learning rate to 0.01 both in GAT-based federated model training and the fine-tuning stage. (A configuration sketch follows the table.) |
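A minimal sketch of the leave-one-out protocol quoted in the Dataset Splits row, assuming interactions are stored in a pandas DataFrame; the function name and the `user_id` / `timestamp` column names are hypothetical and not taken from the authors' code.

```python
import pandas as pd

def leave_one_out_split(interactions: pd.DataFrame):
    """Hold out each user's latest interaction for testing; keep the rest for training."""
    # Sort so the last row per user is that user's most recent interaction.
    interactions = interactions.sort_values(["user_id", "timestamp"])
    # The latest interaction of each user forms the test set.
    test = interactions.groupby("user_id").tail(1)
    # All remaining interactions form the training set.
    train = interactions.drop(test.index)
    return train, test
```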
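The hyperparameters quoted in the Experiment Setup row, collected into an illustrative PyTorch-style configuration; the dictionary keys and the `make_optimizer` helper are hypothetical, and only the values come from the paper.

```python
import torch

# Values quoted in the Experiment Setup row; the key names are illustrative placeholders.
CONFIG = {
    "batch_size": 256,
    "latent_dim": 8,
    "gat_layers": 2,         # propagation layers of the GAT-based federated model
    "mlp_hidden": (16, 4),   # sizes of the two MLP hidden layers
    "epsilon": 8,            # privacy budget ϵ
    "alpha": 0.01,           # coefficient α of the GAT objective
    "beta": 0.01,            # coefficient β of the GAT objective
    "lr": 0.01,              # Adam learning rate for both training stages
}

def make_optimizer(model: torch.nn.Module) -> torch.optim.Optimizer:
    # The paper states Adam is used with learning rate 0.01 in both the
    # GAT-based federated training and the fine-tuning stage.
    return torch.optim.Adam(model.parameters(), lr=CONFIG["lr"])
```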