Federated Learning from Pre-Trained Models: A Contrastive Learning Approach

Authors: Yue Tan, Guodong Long, Jie Ma, Lu Liu, Tianyi Zhou, Jing Jiang

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform a thorough evaluation of the proposed FedPCL in the lightweight framework, measuring and visualizing its ability to fuse various pre-trained models on popular FL datasets.
Researcher Affiliation | Collaboration | Yue Tan (1), Guodong Long (1), Jie Ma (1), Lu Liu (2), Tianyi Zhou (3, 4), Jing Jiang (1); (1) Australian Artificial Intelligence Institute, FEIT, University of Technology Sydney; (2) Google Research; (3) University of Washington; (4) University of Maryland
Pseudocode | Yes | Algorithm 1: FedPCL
Open Source Code | No | The paper states 'We implement all the methods using PyTorch and conduct all experiments on one NVIDIA Tesla V100 GPU.' but does not provide a link to, or an explicit statement about the release of, its source code.
Open Datasets | Yes | We evaluate our proposed framework on the following three benchmark datasets: Digit-5 [21], Office-10 [71], and the DomainNet dataset [72].
Dataset Splits | No | The paper lists the benchmark datasets and some hyperparameters, but does not explicitly describe training/validation/test splits (percentages, sample counts, or how a validation set is constructed); it reports test accuracy only.
Hardware Specification | Yes | We implement all the methods using PyTorch and conduct all experiments on one NVIDIA Tesla V100 GPU.
Software Dependencies | No | The paper states 'We implement all the methods using PyTorch' but does not give a version number for PyTorch or any other software dependency.
Experiment Setup | Yes | We use a batch size of 32 and an Adam [74] optimizer with weight decay 1e-4 and learning rate 0.001. The default setting for local update epochs is E = 1 and the temperature τ is 0.07.
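
The sketch below shows one way the reported training configuration (batch size 32, Adam with learning rate 0.001 and weight decay 1e-4, E = 1 local epoch, temperature τ = 0.07) could be wired together in PyTorch. It is an illustrative sketch only: the encoder, data loader, and prototype tensor are assumed placeholders, and the temperature-scaled loss shown is a generic InfoNCE-style stand-in rather than the authors' exact FedPCL objective or released code.

# Illustrative sketch of the reported setup; not the authors' FedPCL implementation.
import torch
import torch.nn.functional as F

BATCH_SIZE = 32      # reported batch size (used when building the DataLoader)
LOCAL_EPOCHS = 1     # default local update epochs E
TAU = 0.07           # contrastive temperature

def prototype_contrastive_loss(features, prototypes, labels, tau=TAU):
    """Generic temperature-scaled (InfoNCE-style) loss between L2-normalized
    sample features and class prototypes; a stand-in for the paper's
    prototype-wise contrastive objective."""
    features = F.normalize(features, dim=1)       # (B, d)
    prototypes = F.normalize(prototypes, dim=1)   # (C, d)
    logits = features @ prototypes.t() / tau      # (B, C) similarity logits
    return F.cross_entropy(logits, labels)

def local_update(encoder, loader, prototypes, device="cuda"):
    """One client's local training pass under the reported hyperparameters."""
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3, weight_decay=1e-4)
    encoder.train()
    for _ in range(LOCAL_EPOCHS):
        for x, y in loader:                       # loader built with batch_size=BATCH_SIZE
            x, y = x.to(device), y.to(device)
            loss = prototype_contrastive_loss(encoder(x), prototypes.to(device), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()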