Towards Unbiased Training in Federated Open-world Semi-supervised Learning

Authors: Jie Zhang, Xiaosong Ma, Song Guo, Wenchao Xu

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 4. Experiments
Researcher Affiliation | Academia | Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China.
Pseudocode | Yes | Algorithm 1 FedoSSL Algorithm
Open Source Code | No | The paper does not provide an explicit statement about the release of its source code or a link to a code repository.
Open Datasets | Yes | We evaluate the FedoSSL framework over three datasets: CIFAR-10, CIFAR-100, and CINIC-10 (Darlow et al., 2018).
Dataset Splits | Yes | For all datasets, we first divide the classes into 60% seen and 40% unseen classes, then select 50% of the seen classes as labeled data and use the rest as unlabeled data. For CIFAR-10 and CINIC-10, one of the unseen classes is selected as the globally unseen class and the remaining 3 are locally unseen classes; each client owns all 6 seen classes, the globally unseen class, and one locally unseen class. For CIFAR-100, 10 of the unseen classes are selected as globally unseen classes and the remaining 30 are locally unseen classes; each client owns all 60 seen classes, the 10 globally unseen classes, and 10 locally unseen classes. (See the split sketch after this table.)
Hardware Specification | Yes | We simulate all clients and the server on a workstation with an RTX 2080Ti GPU, a 3.6-GHz Intel Core i9-9900KF CPU, and 64 GB of RAM.
Software Dependencies | No | The paper mentions using ResNet-18 as a backbone model and standard SGD, but does not specify software library versions (e.g., PyTorch, TensorFlow, CUDA) or specific solver versions.
Experiment Setup | Yes | Implementation Details. For all datasets, we use ResNet-18 as the backbone model and train it using standard SGD with a momentum of 0.9 and a weight decay of 5×10⁻⁴. The dimension of the classifier corresponds to the number of classes in each dataset. Unless otherwise explicitly specified, α, β, and γ are set to 1. The model is trained for 50 global rounds with 5 local epochs in each round. The batch size is 512 for all experiments. Similar to ORCA (Cao et al., 2022), we only update the parameters of the last block of ResNet in the second training stage to avoid over-fitting. (See the configuration sketch after this table.)
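
The dataset-split description above is easier to verify as code. The following is a minimal sketch of the class partition for the CIFAR-10/CINIC-10 setting (60% seen / 40% unseen, half of the seen classes labeled, one globally unseen class shared by all clients, the rest locally unseen). The helper names split_classes and assign_client_classes, the fixed seed, and the round-robin assignment of locally unseen classes are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch of the class-split scheme reported in the paper.
# Numbers follow the CIFAR-10/CINIC-10 setting: 10 classes -> 6 seen / 4 unseen,
# 3 labeled seen classes, 1 globally unseen + 3 locally unseen classes.
import random

def split_classes(num_classes=10, seen_ratio=0.6, labeled_ratio=0.5,
                  num_globally_unseen=1, seed=0):
    rng = random.Random(seed)
    classes = list(range(num_classes))
    rng.shuffle(classes)

    num_seen = int(num_classes * seen_ratio)           # 6 seen classes for CIFAR-10
    seen, unseen = classes[:num_seen], classes[num_seen:]

    num_labeled = int(num_seen * labeled_ratio)        # 3 labeled seen classes
    labeled_seen, unlabeled_seen = seen[:num_labeled], seen[num_labeled:]

    globally_unseen = unseen[:num_globally_unseen]     # shared by every client
    locally_unseen = unseen[num_globally_unseen:]      # spread across clients
    return labeled_seen, unlabeled_seen, globally_unseen, locally_unseen

def assign_client_classes(seen, globally_unseen, locally_unseen, num_clients=3):
    # Each client owns all seen classes, every globally unseen class,
    # and one locally unseen class (round-robin here, purely for illustration).
    return [
        {
            "seen": list(seen),
            "globally_unseen": list(globally_unseen),
            "locally_unseen": [locally_unseen[i % len(locally_unseen)]],
        }
        for i in range(num_clients)
    ]

# Example usage:
# labeled, unlabeled, g_unseen, l_unseen = split_classes()
# clients = assign_client_classes(labeled + unlabeled, g_unseen, l_unseen)
```

For CIFAR-100 the same sketch applies with num_classes=100 and num_globally_unseen=10, yielding 60 seen, 10 globally unseen, and 30 locally unseen classes.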
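
The experiment-setup row translates directly into an optimizer and training-loop configuration. Below is a minimal PyTorch sketch under the reported hyper-parameters (ResNet-18 backbone, SGD with momentum 0.9 and weight decay 5×10⁻⁴, 50 global rounds of 5 local epochs, batch size 512, and last-block-only updates in the second stage). The learning rate, the FedAvg-style mean aggregation, and the local_loaders / loss_fn placeholders are assumptions; the actual FedoSSL objective and aggregation are defined in the paper (Algorithm 1).

```python
# Minimal PyTorch sketch of the reported training configuration, not the authors' code.
import copy
import torch
from torchvision.models import resnet18

GLOBAL_ROUNDS, LOCAL_EPOCHS, BATCH_SIZE = 50, 5, 512

# Classifier width matches the number of classes in the dataset (10 for CIFAR-10).
global_model = resnet18(num_classes=10)

def freeze_all_but_last_block(model):
    # Second-stage trick reported in the paper: only the last ResNet block
    # (and the classifier head) remain trainable to avoid over-fitting.
    for name, p in model.named_parameters():
        p.requires_grad = name.startswith(("layer4", "fc"))

def local_update(model, loader, loss_fn):
    optimizer = torch.optim.SGD(
        (p for p in model.parameters() if p.requires_grad),
        lr=0.1, momentum=0.9, weight_decay=5e-4)   # lr=0.1 is an assumption
    for _ in range(LOCAL_EPOCHS):
        for x, y in loader:                         # loader built with batch_size=512
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()         # loss_fn stands in for the FedoSSL objective
            optimizer.step()
    return model.state_dict()

def federated_training(local_loaders, loss_fn):
    for _ in range(GLOBAL_ROUNDS):
        states = [local_update(copy.deepcopy(global_model), dl, loss_fn)
                  for dl in local_loaders]
        # FedAvg-style aggregation (simple mean; the paper's aggregation may differ).
        avg = {k: torch.stack([s[k].float() for s in states]).mean(0)
               for k in states[0]}
        global_model.load_state_dict(avg)

# In the second training stage, call freeze_all_but_last_block(global_model)
# before running federated_training, mirroring the ORCA-style fine-tuning.
```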