Federated Continual Learning via Prompt-based Dual Knowledge Transfer

Authors: Hongming Piao, Yichen Wu, Dapeng Wu, Ying Wei

ICML 2024

Reproducibility variables, assessed results, and supporting LLM responses:
Research Type: Experimental. "Comprehensive experimental results demonstrate the superiority of our method in terms of reduction in communication costs and enhancement of knowledge transfer."
Researcher Affiliation: Academia. "City University of Hong Kong; Nanyang Technological University. Correspondence to: Hongming Piao <hpiao6-c@my.cityu.edu.hk>, Dapeng Wu <dapengwu@cityu.edu.hk>, Ying Wei <ying.wei@ntu.edu.sg>."
Pseudocode: Yes. "Algorithm 1: The training procedure of Powder."
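
The full procedure is given as Algorithm 1 in the paper. For orientation only, the sketch below shows a generic prompt-based federated round in which clients train small prompt tensors on top of a frozen backbone and the server averages them FedAvg-style; this is what keeps the communication cost low. All names and the client interfaces here are hypothetical stand-ins, not the authors' Powder implementation, which additionally performs the paper's dual knowledge transfer.

    import copy
    import torch
    import torch.nn as nn

    class PromptPool(nn.Module):
        """Learnable prompt parameters: M prompts of length L with embedding dim D."""
        def __init__(self, M=10, L=8, D=768):
            super().__init__()
            self.prompts = nn.Parameter(torch.randn(M, L, D) * 0.02)

    def fedavg(states):
        # Element-wise average of client prompt updates (FedAvg on prompts only).
        return {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}

    def federated_round(server_pool, clients, local_epochs):
        """One communication round: only prompt tensors travel, never the frozen backbone."""
        updates = []
        for client in clients:
            local = copy.deepcopy(server_pool)  # client starts from the server's prompts
            opt = torch.optim.SGD(local.parameters(), lr=0.005)
            for _ in range(local_epochs):
                for x, y in client.loader:  # hypothetical client object with a DataLoader
                    logits = client.forward(x, local.prompts)  # frozen backbone + prompts
                    loss = nn.functional.cross_entropy(logits, y)
                    opt.zero_grad()
                    loss.backward()
                    opt.step()
            updates.append(local.state_dict())
        server_pool.load_state_dict(fedavg(updates))
        return server_pool
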
Open Source Code: Yes. "Code is available at https://github.com/piaohongming/Powder."
Open Datasets: Yes. "Dataset: We construct our benchmarks based on two image datasets commonly used for prompt-based continual learning: ImageNet-R and DomainNet. ImageNet-R (Hendrycks et al., 2021; Wang et al., 2022a)... DomainNet (Peng et al., 2019)"
Dataset Splits: Yes. "Following DualPrompt (Wang et al., 2022a), we split the dataset into a training set with 24,000 images and a test set with 6,000 images. To search for more suitable values of k and λ, we selected 20% of the training set as a validation set."
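
As a concrete reading of those numbers: 20% of the 24,000 training images gives a 4,800-image validation set, leaving 19,200 for training. A minimal sketch, assuming a pre-built ImageNet-R dataset object (the placeholder below stands in for the real images, and the fixed seed is our assumption):

    import torch
    from torch.utils.data import TensorDataset, random_split

    # Placeholder for the 30,000-image ImageNet-R pool; real code would load actual images.
    full_pool = TensorDataset(torch.arange(30_000).unsqueeze(1))

    # 24,000/6,000 train/test split, following DualPrompt (Wang et al., 2022a).
    gen = torch.Generator().manual_seed(0)
    train_pool, test_set = random_split(full_pool, [24_000, 6_000], generator=gen)

    # 20% of the training set held out as validation for tuning k and lambda.
    n_val = int(0.2 * len(train_pool))  # 4,800 images
    train_set, val_set = random_split(train_pool, [len(train_pool) - n_val, n_val], generator=gen)
    print(len(train_set), len(val_set), len(test_set))  # 19200 4800 6000
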
Hardware Specification: Yes. "All results are averaged over three runs and are obtained on a 46GB NVIDIA RTX A6000 GPU."
Software Dependencies: No. The paper mentions PyTorch (Paszke et al., 2019) but does not provide specific version numbers for PyTorch or the other libraries used in the implementation.
Experiment Setup: Yes. "Local epochs for each round are set to 10 for ImageNet-R and 4 for DomainNet for local convergence. ... The learning rate is set to 0.005. The hyperparameters λ and p are set as 1 and 30 respectively. ... M = 10, L = 8 and D = 768 for Fed-CODAP, Fed-CPrompt and Powder for a fair comparison."
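
Collected in one place, the reported setup amounts to a config like the sketch below; the key names are ours, chosen only to mirror the paper's symbols, not identifiers from the authors' code:

    # Hypothetical config mirroring the reported hyperparameters; names are ours.
    POWDER_CONFIG = {
        "local_epochs": {"imagenet_r": 10, "domainnet": 4},  # per communication round
        "lr": 0.005,   # learning rate
        "lambda": 1,   # loss-weighting hyperparameter tuned on the validation set
        "p": 30,       # second hyperparameter reported alongside lambda
        "M": 10,       # number of prompts
        "L": 8,        # prompt length
        "D": 768,      # prompt embedding dimension
    }

Keeping M, L, and D identical across Fed-CODAP, Fed-CPrompt, and Powder matches the fair-comparison setup the excerpt describes.
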