Dual Personalization on Federated Recommendation
Authors: Chunxu Zhang, Guodong Long, Tianyi Zhou, Peng Yan, Zijian Zhang, Chengqi Zhang, Bo Yang
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on multiple benchmark datasets have demonstrated the effectiveness of PFedRec and the dual personalization mechanism. |
| Researcher Affiliation | Academia | (1) Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, China; (2) College of Computer Science and Technology, Jilin University, China; (3) Australian Artificial Intelligence Institute, FEIT, University of Technology Sydney; (4) Computer Science and UMIACS, University of Maryland. {cxzhang19, zhangzj2114}@mails.jlu.edu.cn, {guodong.long, Chengqi.Zhang}@uts.edu.au, zhou@umiacs.umd.edu, yanpeng9008@hotmail.com, ybo@jlu.edu.cn |
| Pseudocode | Yes | Algorithm 1: Optimization Workflow of PFedRec (see the workflow sketch after this table). |
| Open Source Code | Yes | The code is available. Code: https://github.com/Zhangcx19/IJCAI-23-PFedRec |
| Open Datasets | Yes | We evaluate the proposed PFedRec on four real-world datasets: MovieLens-100K, MovieLens-1M [Harper and Konstan, 2015], Lastfm-2K [Cantador et al., 2011] and Amazon-Video [Ni et al., 2019]. |
| Dataset Splits | Yes | For the dataset split, we follow the prevalent leave-one-out evaluation [He et al., 2017] (see the split sketch after this table). |
| Hardware Specification | No | No specific hardware details (such as GPU/CPU models, memory, or cloud instance specifications) used for running experiments are provided in the paper. |
| Software Dependencies | No | We implement the methods based on the PyTorch framework. No version numbers for PyTorch or other software dependencies are provided. |
| Experiment Setup | Yes | We sample 4 negative instances for each positive instance, following [He et al., 2017] (see the sampling sketch after this table). For a fair comparison, we set the user (item) embedding size to 32, fix the batch size at 256 for all methods, and set other model details of the baselines according to their original papers. The total number of communication rounds is set to 100, which experiments show is sufficient for all methods to converge. For our method, we use a one-layer MLP as the score function for simplicity. |
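
The pseudocode row above points to Algorithm 1, the optimization workflow of PFedRec. Below is a minimal sketch of one such dual-personalization round, assuming the structure described in the paper: item embeddings are uploaded and averaged on the server, while the score function stays private on each client and both components are fine-tuned locally. Every name here (`Client`, `local_update`, `server_round`) is an illustrative assumption, not the authors' code; the reference implementation lives in the repository linked above.

```python
import torch
import torch.nn as nn

class Client(nn.Module):
    """One federated client: item embeddings (shared via the server, then
    personalized locally) plus a fully private one-layer MLP score function."""
    def __init__(self, num_items: int, dim: int = 32):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)  # aggregated by the server
        self.score_fn = nn.Linear(dim, 1)             # never leaves the client

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.score_fn(self.item_emb(item_ids))).squeeze(-1)

def local_update(client: Client, items: torch.Tensor, labels: torch.Tensor,
                 lr: float = 0.01, epochs: int = 1) -> torch.Tensor:
    """Fine-tune both personal components on local data; upload item embeddings."""
    opt = torch.optim.SGD(client.parameters(), lr=lr)
    loss_fn = nn.BCELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(client(items), labels).backward()
        opt.step()
    return client.item_emb.weight.detach().clone()

def server_round(clients, local_data):
    """One communication round: collect, average, and broadcast item embeddings."""
    uploads = [local_update(c, items, labels)
               for c, (items, labels) in zip(clients, local_data)]
    global_emb = torch.stack(uploads).mean(dim=0)
    for c in clients:  # clients re-personalize this in their next local update
        c.item_emb.weight.data.copy_(global_emb)
```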
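
The dataset-splits row cites the leave-one-out protocol of He et al., 2017: each user's most recent interaction is held out for testing and the rest is used for training. A sketch, assuming the raw rating files load into a DataFrame with `user`, `item`, and `timestamp` columns (the column names are our assumption):

```python
import pandas as pd

def leave_one_out_split(ratings: pd.DataFrame):
    """Hold out each user's latest interaction for testing (He et al., 2017)."""
    ratings = ratings.sort_values(["user", "timestamp"])
    test = ratings.groupby("user").tail(1)   # most recent interaction per user
    train = ratings.drop(test.index)         # everything else is training data
    return train, test
```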
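
The experiment-setup row states that 4 negative instances are sampled per positive instance. A minimal sketch of that sampling step; the uniform-over-unseen-items policy is an assumption carried over from the cited [He et al., 2017] setup:

```python
import random

def sample_negatives(interacted: set, num_items: int, num_neg: int = 4) -> list:
    """Draw num_neg item ids the user has never interacted with."""
    negatives = []
    while len(negatives) < num_neg:
        candidate = random.randrange(num_items)
        if candidate not in interacted:
            negatives.append(candidate)
    return negatives
```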