Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning

Authors: Wonyong Jeong, Jaehong Yoon, Eunho Yang, Sung Ju Hwang

ICLR 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive experimental validation of our method in the two different scenarios, we show that our method outperforms both local semi-supervised learning and baselines which naively combine federated learning with semi-supervised learning.
Researcher Affiliation | Collaboration | Wonyong Jeong1, Jaehong Yoon2, Eunho Yang1,3, and Sung Ju Hwang1,3; 1Graduate School of AI, KAIST, Seoul, South Korea; 2School of Computing, KAIST, Daejeon, South Korea; 3AITRICS, Seoul, South Korea; {wyjeong, jaehong.yoon, eunhoy, sjhwang82}@kaist.ac.kr
Pseudocode | Yes | Algorithm 1 (Labels-at-Client Scenario); Algorithm 2 (Labels-at-Server Scenario). A simplified sketch of the decomposed update shared by both algorithms appears after the table.
Open Source Code | Yes | The code is available at https://github.com/wyjeong/FedMatch.
Open Datasets | Yes | We use CIFAR-10 for this task and split 60,000 instances into training (54,000), valid (3,000), and test (3,000) sets... We use Fashion-MNIST dataset for this task.
Dataset Splits | Yes | We use CIFAR-10 for this task and split 60,000 instances into training (54,000), valid (3,000), and test (3,000) sets. (A minimal split sketch appears after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor speeds, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper mentions software components like 'SGD' and 'ResNet-9 networks' but does not specify version numbers for any libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages.
Experiment Setup | Yes | Table 4 (Hyper-Parameters & Training Setups): We provide all hyper-parameters and training setups for all baseline models and our method. Detailed hyper-parameters are also available in the code. (Includes learning rate, weight decay, batch sizes, etc.)
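
Algorithms 1 and 2 in the paper both build on a decomposed ("disjoint") update in which the model parameters are split as theta = sigma + psi, with sigma trained only on the supervised loss and psi only on an unsupervised consistency loss. The following is a minimal, hypothetical Python sketch of that idea; the toy linear model, noise-based views, data, and learning rate are all illustrative assumptions, and the paper's actual method additionally enforces inter-client consistency against helper clients' models.

    # Hypothetical sketch of disjoint learning on decomposed parameters
    # (theta = sigma + psi), loosely after Algorithms 1-2. Toy model and
    # toy data; this is NOT the authors' implementation.
    import numpy as np

    rng = np.random.default_rng(0)
    D, C = 8, 3                                    # feature dim, classes
    X_lab = rng.normal(size=(32, D))               # labeled data (assumed)
    y_lab = rng.integers(0, C, size=32)
    X_unl = rng.normal(size=(128, D))              # unlabeled data (assumed)

    sigma = np.zeros((D, C))                       # supervised parameters
    psi = np.zeros((D, C))                         # unsupervised parameters
    lr = 0.1                                       # assumed learning rate

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    for step in range(200):
        # Supervised step: cross-entropy on labeled data updates sigma only.
        p = softmax(X_lab @ (sigma + psi))
        p[np.arange(len(y_lab)), y_lab] -= 1.0
        sigma -= lr * X_lab.T @ p / len(y_lab)

        # Consistency step: predictions on two noisy views of the same
        # unlabeled batch should agree; the gradient updates psi only,
        # with the first view's prediction treated as a fixed target.
        target = softmax((X_unl + 0.1 * rng.normal(size=X_unl.shape)) @ (sigma + psi))
        view2 = X_unl + 0.1 * rng.normal(size=X_unl.shape)
        pred = softmax(view2 @ (sigma + psi))
        psi -= lr * view2.T @ (pred - target) / len(X_unl)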
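
The dataset-splits row quotes a 54,000/3,000/3,000 partition of CIFAR-10's 60,000 instances. Below is a minimal sketch of one way to produce such a split; the shuffling seed and the use of tf.keras.datasets.cifar10 are assumptions, since the paper's exact procedure is in its released code.

    import numpy as np
    import tensorflow as tf

    # Load all 60,000 CIFAR-10 images (packaged as 50,000 train + 10,000 test).
    (x_a, y_a), (x_b, y_b) = tf.keras.datasets.cifar10.load_data()
    x = np.concatenate([x_a, x_b])
    y = np.concatenate([y_a, y_b])

    # Shuffle and carve out 54,000 train / 3,000 valid / 3,000 test.
    idx = np.random.default_rng(seed=0).permutation(len(x))  # seed is an assumption
    train, valid, test = idx[:54_000], idx[54_000:57_000], idx[57_000:]
    x_train, y_train = x[train], y[train]
    x_valid, y_valid = x[valid], y[valid]
    x_test,  y_test  = x[test],  y[test]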