(FL)²: Overcoming Few Labels in Federated Semi-Supervised Learning

Authors: Seungjoo Lee, Thanh-Long V. Le, Jaemin Shin, Sung-Ju Lee

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on three benchmark datasets demonstrate that our approach significantly improves performance and bridges the gap with SSL, particularly in scenarios with scarce labeled data.
Researcher Affiliation | Academia | Seungjoo Lee, Thanh-Long V. Le, Jaemin Shin, Sung-Ju Lee; KAIST, Republic of Korea; {seungjoo.lee,thanhlong0780,jaemin.shin,profsj}@kaist.ac.kr
Pseudocode | Yes | Appendix A (Algorithm): Algorithm 1, (FL)²: Few-Labels Federated Semi-supervised Learning. (A generic sketch of one training round appears after this table.)
Open Source Code | Yes | The source code is available at https://github.com/seungjoo-ai/FLFL-NeurIPS24
Open Datasets | Yes | Data setup: We evaluate (FL)² on three public datasets: CIFAR10, CIFAR100 [12], and SVHN [32].
Dataset Splits | No | The paper describes the data distribution (IID/non-IID) and the number of labeled/unlabeled samples, but does not explicitly provide percentages or counts for train/validation/test splits.
Hardware Specification | Yes | We used RTX3090 GPUs throughout the experiment.
Software Dependencies | No | The paper mentions an 'official PyTorch implementation' and a 'GitHub repository for SemiFL', but does not specify version numbers for PyTorch or other software dependencies.
Experiment Setup | Yes | In our experiments, we use 100 clients, with a participation ratio of 0.1 per communication round (K = 10)... Both the server and clients optimize their local datasets for five local epochs, with 800 communication rounds. We employ the momentum SGD optimizer with a learning rate of 0.03, momentum of 0.9, and weight decay of 5e-4... the perturbation strength ρ is set to 0.1 for the CIFAR10 and SVHN datasets and 1.0 for the CIFAR100 dataset. (Section 5, Learning setup; and Table 4, "Hyperparameters in our experiments", listing FedMatch [7], FedCon [9], SemiFL [8], and (FL)², Appendix C. A hedged configuration sketch also follows this table.)
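
For orientation, the sketch below mirrors the round structure implied by the reported setup (100 clients, K = 10 sampled per round, five local epochs, FedAvg-style aggregation). It is a generic federated semi-supervised loop, not the authors' Algorithm 1: the confidence threshold tau, the weak/strong augmentation pair, and all function names are illustrative assumptions.

```python
import copy
import random
import torch
import torch.nn.functional as F

# Generic federated semi-supervised round (illustrative only, not the
# paper's Algorithm 1). Clients hold unlabeled data and train with
# confidence-thresholded pseudo-labels; the server averages weights.

def client_update(global_model, unlabeled_loader, local_epochs=5, tau=0.95):
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=0.03,
                          momentum=0.9, weight_decay=5e-4)
    model.train()
    for _ in range(local_epochs):
        for x_weak, x_strong in unlabeled_loader:  # two augmented views
            with torch.no_grad():
                probs = F.softmax(model(x_weak), dim=1)
                conf, pseudo = probs.max(dim=1)
            mask = conf.ge(tau)  # keep only confident pseudo-labels
            if mask.any():
                loss = F.cross_entropy(model(x_strong)[mask], pseudo[mask])
                opt.zero_grad()
                loss.backward()
                opt.step()
    return model.state_dict()

def fedavg(client_states):
    # Uniform parameter averaging over the sampled clients.
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack(
            [s[key].float() for s in client_states]).mean(dim=0)
    return avg

def run_round(global_model, client_loaders, k=10):
    sampled = random.sample(client_loaders, k)  # participation ratio 0.1
    states = [client_update(global_model, dl) for dl in sampled]
    global_model.load_state_dict(fedavg(states))
```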
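
The perturbation strength ρ in the reported setup suggests a sharpness-aware (SAM-style) two-step update on top of the stated SGD configuration. The sketch below shows one such step with the reported hyperparameters (lr 0.03, momentum 0.9, weight decay 5e-4, ρ = 0.1 for CIFAR10/SVHN); it is a minimal reading of those numbers, not the authors' released implementation, and `loss_fn` is a placeholder.

```python
import torch

def sam_step(model, loss_fn, x, y, opt, rho=0.1):
    """One SAM-style update: perturb the weights toward higher loss by
    norm rho, then descend using gradients at the perturbed point."""
    # First pass: gradients at the current weights.
    opt.zero_grad()
    loss_fn(model(x), y).backward()

    # Ascent step: w <- w + rho * g / ||g||, saving the perturbations.
    with torch.no_grad():
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads])) + 1e-12
        eps = {}
        for name, p in model.named_parameters():
            if p.grad is None:
                continue
            eps[name] = rho * p.grad / grad_norm
            p.add_(eps[name])

    # Second pass: gradients at the perturbed weights.
    opt.zero_grad()
    loss_fn(model(x), y).backward()

    # Undo the perturbation, then apply the SGD step.
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in eps:
                p.sub_(eps[name])
    opt.step()

# Optimizer configured as reported in Section 5 (Learning setup):
# opt = torch.optim.SGD(model.parameters(), lr=0.03,
#                       momentum=0.9, weight_decay=5e-4)
```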