FedSC: Provable Federated Self-supervised Learning with Spectral Contrastive Objective over Non-i.i.d. Data

Authors: Shusen Jing, Anlan Yu, Shuai Zhang, Songyang Zhang

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results validate the effectiveness of our proposed algorithm.
Researcher Affiliation | Academia | 1 Department of Radiation Oncology, University of California, San Francisco, California, USA; 2 Department of Electrical and Computer Engineering, Lehigh University, Bethlehem, Pennsylvania, USA; 3 Department of Data Science, New Jersey Institute of Technology, Newark, New Jersey, USA; 4 Department of Electrical and Computer Engineering, University of Louisiana at Lafayette, Lafayette, Louisiana, USA.
Pseudocode | Yes | Algorithm 1 FedSC; Algorithm 2 DP-CalR
Open Source Code | No | The paper does not provide a link to an open-source code repository or an explicit statement about releasing the code for the described methodology.
Open Datasets | Yes | Three datasets, SVHN, CIFAR10, and CIFAR100, are used for evaluation.
Dataset Splits | Yes | SVHN is split into 5 disjoint local datasets of 2 classes each. CIFAR10 is split into 10 disjoint local datasets according to its 10 classes. CIFAR100 is split into 20 disjoint local datasets of 5 classes each. The local dataset sizes for SVHN, CIFAR10, and CIFAR100 are therefore 10,000, 5,000, and 2,500, respectively.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python, PyTorch, CUDA) that would be needed for reproducibility.
Experiment Setup | Yes | Hyperparameters: for all three tasks, the number of communication rounds is T = 200 and the number of local epochs is E = 5. The batch size is B = 512 for SVHN and CIFAR10 and B = 256 for CIFAR100. The number of views is V = 2 for all experiments, except for correlation matrix sharing, where V = 5.
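The class-disjoint, non-i.i.d. partitioning described under Dataset Splits (whole classes assigned to each client) can be sketched as follows. This is a minimal illustration, not the authors' code; the helper name `split_by_class` is our own.

```python
from collections import defaultdict

def split_by_class(labels, num_clients, classes_per_client):
    """Partition sample indices into disjoint per-client subsets by
    assigning whole classes to each client, mirroring the non-i.i.d.
    splits the paper uses for SVHN, CIFAR10, and CIFAR100."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = sorted(by_class)
    # The splits in the paper use every class exactly once.
    assert len(classes) == num_clients * classes_per_client
    clients = []
    for c in range(num_clients):
        assigned = classes[c * classes_per_client:(c + 1) * classes_per_client]
        clients.append([i for cls in assigned for i in by_class[cls]])
    return clients

# Example: a CIFAR10-style split (10 classes -> 10 clients, 1 class each),
# using toy labels in place of the real dataset.
labels = [i % 10 for i in range(100)]
parts = split_by_class(labels, num_clients=10, classes_per_client=1)
```

With real CIFAR10 labels (5,000 samples per class), each of the 10 clients would receive the 5,000 indices of one class, matching the local dataset size quoted above.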
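The hyperparameters listed under Experiment Setup can be collected into one per-task configuration, which makes the cross-task differences (batch size, views) easy to scan. A sketch only; the key names are ours, not from the paper's code.

```python
# Per-task hyperparameters as reported under "Experiment Setup".
# Key names are illustrative assumptions, not identifiers from the paper.
CONFIGS = {
    "SVHN":     {"rounds": 200, "local_epochs": 5, "batch_size": 512, "views": 2},
    "CIFAR10":  {"rounds": 200, "local_epochs": 5, "batch_size": 512, "views": 2},
    "CIFAR100": {"rounds": 200, "local_epochs": 5, "batch_size": 256, "views": 2},
}

# For correlation matrix sharing, the number of views is raised to V = 5.
VIEWS_CORR_SHARING = 5
```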