Private Semi-Supervised Federated Learning

Authors: Chenyou Fan, Junjie Hu, Jianwei Huang

IJCAI 2022

Research Type: Experimental
LLM Response: "5 Experiments and Discussions. We describe the datasets, parameter choices and models we experiment on. Then we analyse the performance with visual and textual tasks with ablation studies and visualizations."

Researcher Affiliation: Academia
LLM Response: "Chenyou Fan (1), Junjie Hu (2), Jianwei Huang (2,3). (1) School of Artificial Intelligence, South China Normal University, China; (2) Shenzhen Institute of Artificial Intelligence and Robotics for Society, China; (3) School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China"

Pseudocode: Yes
LLM Response: "Algorithm 1: FedSSL algorithm. Algorithm 2: Pseudo-code for FedSSL-DP."

Open Source Code: No
LLM Response: "The paper does not include an unambiguous statement that the authors are releasing the code for the work described, nor does it provide a direct link to a source-code repository."

Open Datasets: Yes
LLM Response: "CIFAR-10 [Krizhevsky, 2009] is a common image recognition dataset... MNIST [LeCun et al., 1998] is a digit recognition dataset... Sent140 [Caldas et al., 2018] is an FL benchmark..."

Dataset Splits: No
LLM Response: "The paper describes splits into labeled and unlabeled instances (e.g., 'holding out 5000 (10%), 2500 (5%), and 500 (1%) as labeled instances, respectively, and keeping the rest as unlabeled instances'), but it does not explicitly specify separate training, validation, and test splits with exact percentages or counts for hyperparameter tuning."

Hardware Specification: No
LLM Response: "The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments."

Software Dependencies: No
LLM Response: "The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiments."

Experiment Setup: No
LLM Response: "The paper mentions 'parameter choices' in the experimental section but does not provide specific hyperparameters (e.g., learning rate, batch size, epochs, optimizer settings) or detailed training configurations in the main text."
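Since the paper provides pseudocode for its differentially private federated algorithm but no released code, the general pattern such an algorithm typically builds on can be illustrated. The sketch below is NOT the authors' FedSSL-DP procedure; it is a minimal, hedged example of the standard DP-FedAvg recipe (clip each client's update, average, add Gaussian noise), with all function and parameter names (`dp_fedavg_round`, `clip_norm`, `noise_multiplier`) chosen for illustration only.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Scale a client update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_fedavg_round(global_weights, client_updates, clip_norm=1.0,
                    noise_multiplier=1.0, rng=None):
    """One server round: clip, average, and add Gaussian noise.

    Clipping bounds each client's contribution, so the L2 sensitivity
    of the mean is clip_norm / n_clients; the Gaussian noise scale is
    proportional to that sensitivity.
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean_update = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    noisy_update = mean_update + rng.normal(0.0, sigma, size=mean_update.shape)
    return global_weights + noisy_update
```

A real system would also track the cumulative privacy budget (epsilon, delta) across rounds with a privacy accountant; that bookkeeping, and how the semi-supervised labeling interacts with it, is specific to the paper's algorithm and is omitted here.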