On the Vulnerability of Backdoor Defenses for Federated Learning
Authors: Pei Fang, Jinghui Chen
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through comprehensive experiments and in-depth case study on several state-of-the-art federated backdoor defenses, we summarize our main contributions and findings as follows: We propose a persistent and stealthy backdoor attack for federated learning... In a case study, we examine the effectiveness of several recent federated backdoor defenses from three major categories and give practical guidelines for the choice of the backdoor defenses for different settings. |
| Researcher Affiliation | Academia | Pei Fang1, Jinghui Chen2 1Tongji University 2Pennsylvania State University greilfang@gmail.com, jzc5917@psu.edu |
| Pseudocode | Yes | The complete algorithm with the detail of trigger optimization is in the Appendix. |
| Open Source Code | No | The paper does not include any explicit statement about releasing source code or provide a link to a code repository. |
| Open Datasets | Yes | We test on CIFAR-10 (Krizhevsky and Hinton 2009) and Tiny-ImageNet (Le and Yang 2015) |
| Dataset Splits | No | The paper mentions using CIFAR-10 and Tiny-ImageNet datasets and notes 'non i.i.d. data with the concentration parameter h = 1.0', but it does not specify explicit train/validation/test split percentages, sample counts, or refer to predefined splits to reproduce the exact data partitioning. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., specific GPU or CPU models, memory, or cloud instances). |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or programming languages used in the experiments. |
| Experiment Setup | Yes | For more details, we set the non i.i.d. data with the concentration parameter h = 1.0 and the total number of clients c is 20 with 4 malicious clients. Each selected client in F3BA locally trains two epochs as benign clients before proposing the model to the server. |
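The paper describes its non-i.i.d. data split only by a concentration parameter (h = 1.0) over 20 clients, and no code is released, so the exact partitioning procedure cannot be verified. A common way to realize such a split is a per-class Dirichlet allocation; the sketch below is a hypothetical reconstruction under that assumption, with all function and parameter names (`dirichlet_partition`, `alpha`, `num_clients`) chosen for illustration rather than taken from the paper.

```python
import numpy as np

def dirichlet_partition(labels, num_clients=20, alpha=1.0, seed=0):
    """Partition sample indices across clients via per-class Dirichlet draws.

    Hypothetical sketch of the non-i.i.d. split the paper reports
    (concentration parameter 1.0, 20 clients); the authors' actual
    partitioning code is not available, so details here are assumptions.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        # Shuffle this class's sample indices, then draw each client's
        # share of the class from a symmetric Dirichlet(alpha) distribution.
        idx = rng.permutation(np.where(labels == cls)[0])
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        # Turn the shares into cumulative split points over the class.
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return [np.array(ci) for ci in client_indices]

# Toy demo: 10 classes of 100 samples each (CIFAR-10-like label structure),
# split across the paper's reported 20 clients.
labels = np.repeat(np.arange(10), 100)
parts = dirichlet_partition(labels, num_clients=20, alpha=1.0)
```

With a small `alpha`, each client's label distribution becomes highly skewed; `alpha = 1.0`, as reported in the paper, yields a moderate level of heterogeneity.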