Federated Adaptation for Foundation Model-based Recommendations
Authors: Chunxu Zhang, Guodong Long, Hongkuan Guo, Xiao Fang, Yang Song, Zhaojie Liu, Guorui Zhou, Zijian Zhang, Yang Liu, Bo Yang
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on four benchmark datasets demonstrate our method's superior performance. |
| Researcher Affiliation | Collaboration | (1) Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, China; (2) College of Computer Science and Technology, Jilin University, China; (3) Australian Artificial Intelligence Institute, FEIT, University of Technology Sydney; (4) Kuaishou Technology; (5) Institute for AI Industry Research, Tsinghua University |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available. Code: https://github.com/Zhangcx19/IJCAI-24-FedPA |
| Open Datasets | Yes | We evaluate FedPA on four practical industrial recommendation datasets collected from the short video platform Kuaishou, i.e., KuaiRand (KuaiRand-Pure and KuaiRand-small) and KuaiSAR (KuaiSAR-S and KuaiSAR-R). See https://kuairand.com/ and https://kuaisar.github.io/ |
| Dataset Splits | Yes | The dataset for the federated recommendation system is further split into train, validation, and test sets for each user based on interaction timestamps, with a ratio of 6:2:2 (a minimal split sketch follows the table). |
| Hardware Specification | No | The paper mentions 'deploying the pre-trained model on edge devices' and 'clients with limited computation capability' but does not provide specific hardware details such as GPU or CPU models used for running the experiments. |
| Software Dependencies | No | The paper does not specify versions for any software dependencies, libraries, or programming languages used in the experiments. |
| Experiment Setup | No | The paper describes dataset splits and evaluation protocols but does not provide specific hyperparameter values like learning rate, batch size, number of epochs, or optimizer settings for the experimental setup. |
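
The per-user chronological 6:2:2 split reported in the table can be illustrated with a short sketch. This is a minimal illustration, not the authors' implementation: the pandas-based approach, the column names (`user_id`, `item_id`, `timestamp`), and the rounding of split sizes are assumptions.

```python
# Hypothetical sketch of a per-user chronological 6:2:2 split.
# Assumes a pandas interaction log; column names are illustrative only.
import pandas as pd


def split_per_user(df: pd.DataFrame, ratios=(0.6, 0.2, 0.2)):
    """Split each user's interactions into train/val/test by timestamp order."""
    train_parts, val_parts, test_parts = [], [], []
    for _, group in df.groupby("user_id", sort=False):
        group = group.sort_values("timestamp")  # oldest interactions first
        n = len(group)
        n_train = int(n * ratios[0])
        n_val = int(n * ratios[1])
        train_parts.append(group.iloc[:n_train])
        val_parts.append(group.iloc[n_train:n_train + n_val])
        test_parts.append(group.iloc[n_train + n_val:])
    return pd.concat(train_parts), pd.concat(val_parts), pd.concat(test_parts)


# Toy usage: two users with five interactions each.
interactions = pd.DataFrame({
    "user_id": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
    "item_id": [10, 11, 12, 13, 14, 20, 21, 22, 23, 24],
    "timestamp": [1, 2, 3, 4, 5, 1, 2, 3, 4, 5],
})
train, val, test = split_per_user(interactions)
print(len(train), len(val), len(test))  # 6 2 2 for this toy log
```

Splitting per user (rather than globally) keeps every client's most recent interactions in its own validation and test sets, which matches the federated, user-held-data setting described by the paper.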