Backdoor Federated Learning by Poisoning Backdoor-Critical Layers
Authors: Haomin Zhuang, Mingxian Yu, Hao Wang, Yang Hua, Jian Li, Xu Yuan
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our BC layer-aware backdoor attacks can successfully backdoor FL under seven SOTA defenses with only 10% malicious clients and outperform latest backdoor attack methods. |
| Researcher Affiliation | Academia | Haomin Zhuang¹, Mingxian Yu¹, Hao Wang², Yang Hua³, Jian Li⁴, Xu Yuan⁵ — ¹South China University of Technology, ²Louisiana State University, ³Queen's University Belfast, UK, ⁴Stony Brook University, ⁵University of Delaware |
| Pseudocode | No | The paper describes methods with numbered steps and diagrams (e.g., Section 3, Figure 2), but it does not include formal pseudocode blocks or algorithms labeled as such. |
| Open Source Code | No | The paper does not contain any explicit statements about making the source code available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Datasets: Fashion-MNIST (60,000 images for training and 10,000 for testing, with ten classes) and CIFAR-10 (50,000 for training and 10,000 for testing, with ten classes). FEMNIST is a real-world dataset included in LEAF (Caldas et al., 2018). |
| Dataset Splits | Yes | A local dataset D^(i) in the i-th malicious client is split into training sets D^(i)_{clean,train} and D^(i)_{poison,train}, as well as validation sets D^(i)_{clean,val} and D^(i)_{poison,val}. |
| Hardware Specification | Yes | We conduct all experiments using a NVIDIA RTX A5000 GPU. |
| Software Dependencies | No | The paper mentions using 'PyTorch' but does not specify a version number or other software dependencies with their respective versions. |
| Experiment Setup | Yes | The proportion of clients selected in each round among n = 100 clients is C = 0.1. Each selected client trains E = 2 epochs on its local dataset with batch size B = 64. The server trains the global model for R = 200 rounds until it converges. We set τ = 0.95 when identifying the BC layers via Layer Substitution Analysis... We set λ = 1 when training on CIFAR-10 and λ = 0.5 when training on Fashion-MNIST... Table A-7 in the Appendix shows the detailed hyperparameter settings. |
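The experiment-setup hyperparameters quoted above can be collected into a small sketch of the federated-learning schedule. This is an illustrative reconstruction, not the authors' code: the function and variable names are hypothetical, and only the numeric settings (n, C, E, B, R, τ, λ) come from the paper.

```python
import random

# Hyperparameters quoted from the paper's experiment setup.
N_CLIENTS = 100   # n = 100 total clients
C = 0.1           # fraction of clients selected per round
E = 2             # local training epochs per selected client
B = 64            # local batch size
R = 200           # global training rounds
TAU = 0.95        # threshold for identifying BC layers (Layer Substitution Analysis)
LAMBDA = {"cifar10": 1.0, "fashion_mnist": 0.5}  # loss-balancing weight per dataset

def clients_per_round(n_clients: int, c: float) -> int:
    """Number of clients sampled each round: C * n (here 10 of 100)."""
    return max(1, round(c * n_clients))

def simulate_schedule(rounds: int, n_clients: int, c: float, seed: int = 0):
    """Yield the client subset selected in each global round."""
    rng = random.Random(seed)
    k = clients_per_round(n_clients, c)
    for _ in range(rounds):
        yield rng.sample(range(n_clients), k)

# With the paper's settings, 10 of 100 clients each run 2 local epochs
# (batch size 64) in every one of the 200 global rounds.
selected = list(simulate_schedule(R, N_CLIENTS, C))
```

With 10% malicious clients (the attack budget reported in the paper), each round's sampled subset would contain, in expectation, one malicious participant.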