Federated Recommendation with Additive Personalization

Authors: Zhiwei Li, Guodong Long, Tianyi Zhou

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | A thorough experimental study has been conducted to assess the performance of the introduced FedRAP on six popular recommendation datasets: MovieLens-100K (ML-100K), MovieLens-1M (ML-1M), Amazon-Instant-Video (Video), LastFM-2K (LastFM) (Cantador et al., 2011), Ta Feng Grocery (Ta Feng), and QB-article (Yuan et al., 2022).
Researcher Affiliation | Academia | 1 Australian AI Institute, Faculty of Engineering and IT, University of Technology Sydney; 2 Department of Computer Science, University of Maryland, College Park
Pseudocode | Yes | Algorithm 1: Federated Recommendation with Additive Personalization (FedRAP)
Open Source Code | Yes | Our code is available at https://github.com/mtics/FedRAP.
Open Datasets | Yes | A thorough experimental study has been conducted to assess the performance of the introduced FedRAP on six popular recommendation datasets: MovieLens-100K (ML-100K), MovieLens-1M (ML-1M), Amazon-Instant-Video (Video), LastFM-2K (LastFM), Ta Feng Grocery (Ta Feng), and QB-article, all of which are public. (Footnotes provide specific URLs.)
Dataset Splits | Yes | Following precedent (He et al., 2017; Zhang et al., 2023a), we used a leave-one-out strategy to split each dataset for evaluation.
Hardware Specification | Yes | All models were implemented using PyTorch (Paszke et al., 2019), and experiments were conducted on a machine equipped with a 2.5GHz 14-core Intel Core i9-12900H processor, an RTX 3070 Ti Laptop GPU, and 64GB of memory.
Software Dependencies | No | The paper mentions PyTorch (Paszke et al., 2019) and the Opacus library (Yousefpour et al., 2021) but does not specify version numbers for these software dependencies, which are required for full reproducibility.
Experiment Setup | Yes | For each positive sample, we randomly selected 4 negative samples. We performed hyperparameter tuning for all methods: the parameter v1 of our method FedRAP was tuned in the range {10^i | i = -6, ..., 0}, and the parameter v2 of FedRAP in the range {10^i | i = -3, ..., 3}. Given that the second and third terms in Eq. 3 gradually come into effect during training, we pragmatically set λ(a, v1) = tanh(a/10) · v1 and µ(a, v2) = tanh(a/10) · v2, where a is the number of iterations. For PFedRec and FedRAP, the maximum numbers of server-client communication rounds and client-side training iterations were set to 100 and 10, respectively. For NCF, FedMF and FedNCF, we set the number of training iterations to 200. For fairness, all methods used a fixed latent embedding dimension of 32 and a batch size of 2048.
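The evaluation protocol described in the rows above (leave-one-out split, 4 negatives per positive, and the tanh warm-up for the regularization weights in Eq. 3) can be sketched in Python as follows. This is a minimal illustration, not code from the FedRAP repository; the helper names `leave_one_out_split`, `sample_negatives`, and `warmup_weight`, as well as the dict-of-lists input format, are assumptions for the sketch.

```python
import math
import random
from collections import defaultdict

def leave_one_out_split(interactions):
    """Leave-one-out split (He et al., 2017): hold out each user's last
    interaction for testing and train on the rest. `interactions` maps
    a user id to a chronologically ordered list of item ids (a
    hypothetical input format for this sketch)."""
    train, test = defaultdict(list), {}
    for user, items in interactions.items():
        if len(items) < 2:              # a lone interaction stays in training
            train[user] = list(items)
            continue
        train[user] = list(items[:-1])
        test[user] = items[-1]
    return train, test

def sample_negatives(positives, num_items, k=4, rng=random):
    """Draw k negative item ids per positive sample, i.e. items the user
    never interacted with (the paper uses k = 4)."""
    seen = set(positives)
    negatives = []
    while len(negatives) < k * len(positives):
        item = rng.randrange(num_items)
        if item not in seen:
            negatives.append(item)
    return negatives

def warmup_weight(a, v):
    """Warm-up schedule for the second and third terms of Eq. 3:
    tanh(a / 10) * v, where a is the iteration count. The weight starts
    at 0 and approaches v as training progresses."""
    return math.tanh(a / 10) * v

# Hyperparameter grids reported for FedRAP:
v1_grid = [10.0 ** i for i in range(-6, 1)]   # {10^i | i = -6, ..., 0}
v2_grid = [10.0 ** i for i in range(-3, 4)]   # {10^i | i = -3, ..., 3}
```

The schedule keeps both regularizers near zero early on, so the shared and personalized item embeddings can first fit the data before the additive-decomposition constraints take full effect.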