Federated Learning from Only Unlabeled Data with Class-conditional-sharing Clients

Authors: Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, Masashi Sugiyama

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on benchmark and real-world datasets demonstrate the effectiveness of FedUL." "In this section, we conduct experiments to validate the effectiveness of the proposed FedUL method under various testing scenarios."
Researcher Affiliation | Academia | "Nan Lu (1), Zhao Wang (2), Xiaoxiao Li (3), Gang Niu (4), Qi Dou (2), Masashi Sugiyama (4,1); (1) The University of Tokyo, (2) The Chinese University of Hong Kong, (3) The University of British Columbia, (4) RIKEN"
Pseudocode | Yes | "Algorithm 1 Federation of unsupervised learning (FedUL)" (a federation-loop skeleton is sketched below the table)
Open Source Code | Yes | "Code is available at https://github.com/lunanbit/FedUL."
Open Datasets | Yes | "we first perform experiments on widely adopted benchmarks MNIST (LeCun et al., 1998) and CIFAR-10 (Krizhevsky et al., 2009)." "We use the RSNA Intracranial Hemorrhage Detection dataset (Flanders et al., 2020)"
Dataset Splits | Yes | "Please note for all experiments, we split 20% original data for validation and model selection." (split sketched below the table)
Hardware Specification | Yes | "We implement all the methods by PyTorch and conduct all the experiments on an NVIDIA TITAN X GPU."
Software Dependencies | No | The paper mentions software such as PyTorch, the Adam optimizer, LeNet, and ResNet-18, but does not provide specific version numbers for these dependencies.
Experiment Setup | Yes | "For model training, we use the cross-entropy loss and Adam (Kingma & Ba, 2015) optimizer with a learning rate of 1e-4 and train 100 rounds. If not specified, our default setting for local update epochs (E) is 1 and batch size (B) is 128 with 5 clients (C) in our FL system." "During training process, we use Adam optimizer with learning rate 1e-4 and the cross-entropy loss. We set batch size to 128 and training rounds to 100. To avoid overfitting, we use L1 regularization. The detailed weight of L1 regularization used in benchmark experiments is shown in Table D.4." (local update sketched below the table)
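To make the Dataset Splits row concrete, the following is a minimal sketch of the quoted 80/20 train/validation split, shown for MNIST. Using torchvision for loading and fixing the random seed are assumptions for illustration; the paper does not give its loading code.

# Minimal sketch of the quoted split: 20% of the original training data
# held out for validation and model selection. torchvision loading and
# the fixed seed are assumptions, not taken from the paper.
import torch
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

transform = transforms.ToTensor()
full_train = datasets.MNIST(root="./data", train=True, download=True, transform=transform)

n_val = int(0.2 * len(full_train))          # "we split 20% original data for validation"
n_train = len(full_train) - n_val
train_set, val_set = random_split(
    full_train, [n_train, n_val], generator=torch.Generator().manual_seed(0)
)

train_loader = DataLoader(train_set, batch_size=128, shuffle=True)   # B = 128
val_loader = DataLoader(val_set, batch_size=128, shuffle=False)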
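The Experiment Setup row quotes Adam with a learning rate of 1e-4, cross-entropy loss, batch size 128, E = 1 local epoch, and L1 regularization. Below is a hedged sketch of one client's local update under those settings. The l1_weight value is a placeholder (the paper's actual weights appear only in its Table D.4), and the targets y stand for whatever labels the method constructs; in FedUL they would be surrogate set indices rather than true class labels.

# Sketch of one client's local update with the quoted settings:
# Adam (lr = 1e-4), cross-entropy loss, and L1 regularization.
# l1_weight is a placeholder; the paper's values are in its Table D.4.
import torch
import torch.nn as nn

def local_update(model, loader, epochs=1, l1_weight=1e-5, device="cpu"):
    """Run E local epochs on one client and return the updated weights."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # lr = 1e-4
    model.train()
    for _ in range(epochs):                                     # E = 1 by default
        for x, y in loader:                                     # y: method-supplied targets
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            # L1 regularization to avoid overfitting; applying it over all
            # parameters is an assumption made for this sketch.
            loss = loss + l1_weight * sum(p.abs().sum() for p in model.parameters())
            loss.backward()
            optimizer.step()
    return model.state_dict()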
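The Pseudocode row refers to Algorithm 1, Federation of unsupervised learning (FedUL). The FedUL-specific steps described in the paper (constructing surrogate labels from each client's unlabeled sets and recovering the wanted model via a transition matrix) are not reproduced here; the sketch below only shows the generic FedAvg-style skeleton such a federation would run, reusing local_update from the previous sketch, with 100 rounds, E = 1, and 5 clients as in the quoted default setting.

# Skeleton of the federation loop implied by Algorithm 1: each round, every
# client runs E local epochs and the server averages the resulting weights.
# FedUL-specific pieces (surrogate labels, transition-matrix correction) are omitted.
import copy
import torch

def fedavg(state_dicts):
    """Average client state dicts (equal client weighting assumed)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

def federate(global_model, client_loaders, rounds=100, local_epochs=1):
    for _ in range(rounds):                                     # 100 communication rounds
        states = [
            local_update(copy.deepcopy(global_model), loader, epochs=local_epochs)
            for loader in client_loaders                        # C = 5 clients by default
        ]
        global_model.load_state_dict(fedavg(states))
    return global_model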