Meta Knowledge Condensation for Federated Learning
Authors: Ping Liu, Xin Yu, Joey Tianyi Zhou
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness and efficiency of our proposed method. |
| Researcher Affiliation | Academia | Ping Liu, Center for Frontier AI Research, A*STAR, 138632, Singapore (pino.pingliu@gmail.com); Xin Yu, Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia (xin.yu@uq.edu.au); Joey Tianyi Zhou, Center for Frontier AI Research, A*STAR, 138632, Singapore (joey_zhou@cfar.a-star.edu.sg) |
| Pseudocode | Yes | Algorithm 1: FedMK |
| Open Source Code | No | The paper does not include an explicit statement or a link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We evaluate our algorithm and compare to the key related works on four benchmarks: MNIST (LeCun et al., 2010), SVHN (Netzer et al., 2011), CIFAR10 (Krizhevsky & Hinton, 2009), and CIFAR100 (Krizhevsky & Hinton, 2009). |
| Dataset Splits | Yes | In MNIST, there are 50,000 images in the training set and 10,000 images in the test set, and their size is 1 × 28 × 28 pixels (c × w × h)... We use 50% of the training set and distribute it on all clients. All testing data is utilized for evaluation. (A hedged partition sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, or memory specifications). |
| Software Dependencies | No | The paper mentions various methods and models (e.g., LeNet), but it does not specify software dependencies with version numbers (e.g., 'PyTorch 1.9', 'Python 3.8'). |
| Experiment Setup | Yes | For the methods conducting local model training on clients, i.e., FedAvg, FedProx, FedDistill, FedEnsemble, and FedGen, we set the local updating number to 20 and the batch size to 32. In our method, we set the meta knowledge size for each dataset based on its characteristics (e.g., the class number, sample size, etc.): 20 per class for MNIST; 100 per class for SVHN; 100 per class for CIFAR10; 40 per class for CIFAR100. We run three trials and report the mean accuracy performance (MAP). (The quoted values are collected into a configuration sketch after the table.) |
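
The dataset-splits row quotes a 50% client partition of the training data with the full test set held out for evaluation. Below is a minimal sketch of that split, assuming a PyTorch/torchvision stack and an illustrative client count; neither is stated in the paper's quoted text.

```python
# Hedged sketch of the quoted data split: 50% of the MNIST training set
# distributed across clients, the full test set kept for evaluation.
# torchvision and NUM_CLIENTS are assumptions, not quoted from the paper.
import numpy as np
import torchvision

NUM_CLIENTS = 10  # illustrative; the quoted text gives no client count

train = torchvision.datasets.MNIST(root="./data", train=True, download=True)
test = torchvision.datasets.MNIST(root="./data", train=False, download=True)

rng = np.random.default_rng(seed=0)
perm = rng.permutation(len(train))
half = perm[: len(perm) // 2]                      # "50% of the training set"
client_shards = np.array_split(half, NUM_CLIENTS)  # one index shard per client

# "All testing data is utilized for evaluation."
eval_indices = np.arange(len(test))
```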
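The experiment-setup row lists the paper's quoted hyperparameters. Collected as a plain configuration for clarity; only the values come from the quoted text, while the constant names are illustrative.

```python
# Hedged sketch: the quoted hyperparameters as a plain config.
# Values are from the paper's quoted setup; names are illustrative.
LOCAL_UPDATES = 20  # local updating number for FedAvg, FedProx, FedDistill,
                    # FedEnsemble, and FedGen
BATCH_SIZE = 32

# Meta knowledge size per class, set per dataset characteristics
# (class number, sample size, etc.).
META_KNOWLEDGE_PER_CLASS = {
    "MNIST": 20,
    "SVHN": 100,
    "CIFAR10": 100,
    "CIFAR100": 40,
}

NUM_TRIALS = 3  # mean accuracy performance (MAP) reported over three trials
```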