Towards Understanding Ensemble Distillation in Federated Learning

Authors: Sejun Park, Kihun Hong, Ganguk Hwang

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We also provide experimental results to verify our theoretical results on ensemble distillation in federated learning.
Researcher Affiliation | Academia | Department of Mathematical Sciences, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea.
Pseudocode | Yes | Algorithm 1: KRR with iterative Ensemble Distillation in FL (a hedged sketch of this procedure follows the table).
Open Source Code | No | No explicit statement providing concrete access to source code for the methodology described in this paper was found. No repository links or explicit code release statements were present.
Open Datasets | Yes | The real world dataset is a simplified regression version of the MNIST dataset from another existing work (Cui et al., 2021).
Dataset Splits | No | The paper mentions training data and test data, but no explicit details about a separate validation split (percentages, counts, or methodology) are provided. It states: 'In the test phase, we use a test dataset of size 1000 whose data points are generated from the procedure explained in Appendix E.1.'
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments were provided in the paper.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or relevant libraries with their versions) were listed in the paper.
Experiment Setup | Yes | We conduct experiments with the local datasets of sizes N = 10 and N = 20. ... we set the unlabeled public dataset size Np = (m − 1)N. For the iterative ensemble distillation algorithm, we set the total number of communication rounds to t = 200 for convergence. We use the fixed distillation hyperparameter α = 1/m but conduct hyperparameter tuning for λ > 0 using grid search. (A configuration sketch based on these settings follows the table.)
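
For the Pseudocode row above, the following is a minimal NumPy sketch of one plausible reading of Algorithm 1 (KRR with iterative ensemble distillation in FL): each client fits kernel ridge regression on its local data, the server averages client predictions on the unlabeled public dataset, and clients refit on their local data augmented with the resulting pseudo-labels. The RBF kernel, the way the distillation weight α enters the pseudo-label update, and all function names are assumptions made for illustration, not the paper's exact specification.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel between the rows of A and B (kernel choice is an assumption).
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def krr_fit(X, y, lam):
    # Kernel ridge regression: solve (K + lam * n * I) coef = y.
    n = len(X)
    K = rbf_kernel(X, X)
    coef = np.linalg.solve(K + lam * n * np.eye(n), y)
    return X, coef

def krr_predict(model, X_query):
    X_train, coef = model
    return rbf_kernel(X_query, X_train) @ coef

def ensemble_distillation(local_sets, X_pub, lam, alpha, rounds):
    """local_sets: list of (X_i, y_i) client datasets; X_pub: unlabeled public data.
    Returns the per-client KRR models after `rounds` distillation rounds."""
    y_pub = np.zeros(len(X_pub))                       # pseudo-labels on the public data
    models = [krr_fit(X, y, lam) for X, y in local_sets]
    for _ in range(rounds):
        # Server side: ensemble = average of client predictions on the public data.
        ensemble = np.mean([krr_predict(m, X_pub) for m in models], axis=0)
        # Distillation update; how alpha enters is an assumption, not the paper's rule.
        y_pub = (1 - alpha) * y_pub + alpha * ensemble
        # Client side: refit KRR on local data plus the pseudo-labeled public data.
        models = [
            krr_fit(np.vstack([X, X_pub]), np.concatenate([y, y_pub]), lam)
            for X, y in local_sets
        ]
    return models
```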
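
For the Experiment Setup row, here is a hypothetical driver, reusing the functions from the sketch above, that wires up the quoted settings: m clients with local datasets of size N, an unlabeled public dataset of size Np = (m − 1)N, t = 200 communication rounds, α = 1/m, and a grid search over λ > 0. The synthetic data generator, the λ grid, and the MSE metric are placeholders rather than the paper's procedure (which generates data as described in its Appendix E.1).

```python
import numpy as np

def make_local_data(N, d=10, rng=None):
    # Placeholder synthetic regression data; the paper's generation procedure differs.
    rng = np.random.default_rng() if rng is None else rng
    X = rng.normal(size=(N, d))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=N)
    return X, y

def run_setup(m=5, N=10, rounds=200, seed=0):
    rng = np.random.default_rng(seed)
    local_sets = [make_local_data(N, rng=rng) for _ in range(m)]
    X_pub, _ = make_local_data((m - 1) * N, rng=rng)   # Np = (m - 1) * N, labels discarded
    X_test, y_test = make_local_data(1000, rng=rng)    # test set of size 1000
    alpha = 1.0 / m                                    # fixed distillation weight
    best_lam, best_mse = None, np.inf
    for lam in [1e-4, 1e-3, 1e-2, 1e-1, 1.0]:          # grid for lambda > 0 (grid values assumed)
        models = ensemble_distillation(local_sets, X_pub, lam, alpha, rounds)
        preds = np.mean([krr_predict(mdl, X_test) for mdl in models], axis=0)
        mse = float(np.mean((preds - y_test) ** 2))
        if mse < best_mse:
            best_lam, best_mse = lam, mse
    return best_lam, best_mse

if __name__ == "__main__":
    print(run_setup(m=5, N=10))
```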