Byzantine-Robust Federated Learning: Impact of Client Subsampling and Local Updates
Authors: Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, Geovani Rizk, Sasha Voitovych
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our theory by experiments on the FEMNIST and CIFAR-10 image classification tasks. |
| Researcher Affiliation | Academia | 1EPFL, 2Sorbonne Université, LPSM, 3University of Toronto. |
| Pseudocode | Yes | Algorithm 1 FedRo: FedAvg with a robust aggregation rule A (see the sketch after this table). |
| Open Source Code | No | No explicit statement or link providing access to the authors' open-source code for their methodology was found. |
| Open Datasets | Yes | We use the FEMNIST dataset (Caldas et al., 2018) and CIFAR-10 dataset (Krizhevsky et al., 2009). |
| Dataset Splits | No | The paper mentions 'training error' and general FL concepts, but it does not specify explicit train/validation/test dataset splits (percentages, counts, or predefined splits) for the experiments. |
| Hardware Specification | Yes | Machines used for all the experiments: 2 NVIDIA A10-24GB GPUs and 8 NVIDIA Titan X Maxwell 16GB GPUs. |
| Software Dependencies | No | The paper mentions the 'LEAF Library' and a shell script for data preprocessing but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We summarize the learning hyperparameters in Table 1. ... We list all the hyperparameters used for this experiment in Table 4. ... We list all the hyperparameters used for this experiment in Table 5. |
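The "Pseudocode" row refers to Algorithm 1 (FedRo), i.e., FedAvg run with a robust aggregation rule A in place of plain averaging, together with client subsampling and multiple local updates per round. Below is a minimal NumPy sketch of that structure, assuming a toy `LinearClient` interface and coordinate-wise trimmed mean as the robust rule; the names `fed_ro`, `LinearClient`, and `trimmed_mean` are illustrative and not taken from the authors' code.

```python
import numpy as np

class LinearClient:
    """Toy client holding a local least-squares problem (illustration only)."""
    def __init__(self, X, y):
        self.X, self.y = X, y

    def gradient(self, w):
        # Gradient of the local mean-squared-error objective at w.
        return self.X.T @ (self.X @ w - self.y) / len(self.y)

def trimmed_mean(updates, f):
    """Coordinate-wise trimmed mean: drop the f largest and f smallest
    values in each coordinate, then average the rest (one common robust rule)."""
    s = np.sort(np.stack(updates), axis=0)
    return s[f : len(updates) - f].mean(axis=0)

def fed_ro(model, clients, rounds, sample_size, local_steps, lr, f, seed=0):
    """Sketch of FedAvg with client subsampling, local SGD steps, and a robust
    aggregation rule A (here trimmed_mean) in place of plain averaging."""
    rng = np.random.default_rng(seed)
    for _ in range(rounds):
        # Server samples a subset of clients for this round.
        sampled = rng.choice(len(clients), size=sample_size, replace=False)
        updates = []
        for i in sampled:
            w = model.copy()
            for _ in range(local_steps):
                w -= lr * clients[i].gradient(w)   # local SGD update
            updates.append(w - model)              # client sends its model update
        model = model + trimmed_mean(updates, f)   # robust aggregation step
    return model
```

As a usage example, one could build a list of `LinearClient` objects from synthetic data and call `fed_ro(np.zeros(d), clients, rounds=50, sample_size=10, local_steps=5, lr=0.1, f=2)`; the trimming parameter `f` plays the role of the tolerated number of Byzantine clients among those sampled.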