ZeroFL: Efficient On-Device Training for Federated Learning with Local Sparsity
Authors: Xinchi Qiu, Javier Fernandez-Marques, Pedro PB Gusmao, Yan Gao, Titouan Parcollet, Nicholas Donald Lane
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments are conducted on two image classification tasks of different complexity, both in terms of the number of samples and classes: FEMNIST (Caldas et al., 2018) and CIFAR10 (Krizhevsky et al., 2009). FEMNIST is constructed by partitioning the Extended MNIST data (Cohen et al., 2017) based on the writers of the digits/characters. We also include the Speech Commands dataset (Warden, 2018), where the task is to classify 1-second-long audio clips. Further details for these datasets can be found in Appendix A.5. |
| Researcher Affiliation | Academia | (1) Department of Computer Science and Technology, University of Cambridge; (2) Department of Computer Science, University of Oxford; (3) Laboratoire Informatique d'Avignon, Avignon Université |
| Pseudocode | Yes | Algorithm 1 ZeroFL: Consider a cluster of N total clients, each client k holding a local dataset n_k, with learning rate ηt at round t and T total communication rounds. E denotes the number of local epochs and K the number of clients participating in each round. wt represents the weights aggregated at round t and dt the difference of weights. Central server does: |
| Open Source Code | No | The paper does not contain an unambiguous statement of open-source code release for the described methodology, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | Experiments are conducted on two image classification tasks of different complexity both in terms of the number of samples and classes: FEMNIST (Caldas et al., 2018) and CIFAR10 (Krizhevsky et al., 2009). ... We also include the Speech Commands dataset (Warden, 2018) |
| Dataset Splits | Yes | In both scenarios we randomly extract 10% out from the training set for validation. This is done at the client level, i.e., the validation set for each client is extracted from each client's training partition. |
| Hardware Specification | No | The paper mentions target platforms such as 'mobile CPUs and GPUs' and 'Cortex-A mobile CPUs' in a general context, but does not specify the exact hardware (model numbers, specifications) used to run the experiments. |
| Software Dependencies | No | The paper mentions 'Flower toolkit (Beutel et al., 2020)' but does not provide version numbers for any software dependencies. |
| Experiment Setup | Yes | The start and end learning rate are 0.1 and 0.01 respectively for CIFAR10, 0.01 and 0.001 for Speech Commands, and 0.004 and 0.001 for FEMNIST. |
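The Algorithm 1 description above (K sampled clients per round, E local epochs, sparse weight differences dt aggregated by the server) can be sketched as follows. This is not the authors' code: the paper's actual sparse-training method differs, and the top-k magnitude masking, the `zerofl_round`/`top_k_mask`/`local_update` names, and the single dummy gradient step standing in for E local epochs are all illustrative assumptions made so the round structure runs end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_mask(update, density):
    # Assumed sparsification rule: keep only the largest-magnitude
    # `density` fraction of entries in a client's weight difference.
    k = max(1, int(density * update.size))
    thresh = np.partition(np.abs(update).ravel(), -k)[-k]
    return update * (np.abs(update) >= thresh)

def local_update(w_global, client_data, lr):
    # Stand-in for E local epochs of SGD on one client's dataset;
    # a single dummy gradient step keeps the sketch self-contained.
    grad = client_data.mean(axis=0) - w_global
    w_local = w_global + lr * grad
    return w_local - w_global  # d_t: the weight difference sent back

def zerofl_round(w_global, clients, lr, density=0.1):
    # Server-side aggregation: each participating client returns a
    # sparsified delta, averaged with weights proportional to its
    # local dataset size n_k (FedAvg-style weighting).
    total = sum(len(c) for c in clients)
    agg = np.zeros_like(w_global)
    for data in clients:
        d = top_k_mask(local_update(w_global, data, lr), density)
        agg += (len(data) / total) * d
    return w_global + agg

# Toy run: 4 clients, a 16-parameter "model", 3 communication rounds.
w = np.zeros(16)
clients = [rng.normal(loc=1.0, size=(20, 16)) for _ in range(4)]
for t in range(3):
    w = zerofl_round(w, clients, lr=0.1)
print(w.shape)  # (16,)
```

Because each client applies its own mask, the aggregated update is generally denser than any single client's delta, which is one reason sparsity levels are usually reported per client rather than on the server model.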
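The per-client 10% validation split noted in the Dataset Splits row can be reproduced with a short helper. This is a sketch under the assumption of a uniform random hold-out within each client's training partition; the `split_client` name and the fixed seed are illustrative, not from the paper.

```python
import numpy as np

def split_client(indices, val_frac=0.1, seed=0):
    # Hold out `val_frac` of a single client's training examples for
    # validation, so each client validates only on its own data.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(indices)
    n_val = int(len(idx) * val_frac)
    return idx[n_val:], idx[:n_val]  # (train indices, val indices)

# Example: a client with 100 local examples.
train_idx, val_idx = split_client(np.arange(100))
print(len(train_idx), len(val_idx))  # 90 10
```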