DBA: Distributed Backdoor Attacks against Federated Learning
Authors: Chulin Xie, Keli Huang, Pin-Yu Chen, Bo Li
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments to show that the attack success rate of DBA is significantly higher than centralized backdoors under different settings. |
| Researcher Affiliation | Collaboration | Chulin Xie, Zhejiang University (chulinxie@zju.edu.cn); Keli Huang, Shanghai Jiao Tong University (nick_cooper@sjtu.edu.cn); Pin-Yu Chen, IBM Research (pin-yu.chen@ibm.com); Bo Li, University of Illinois Urbana-Champaign (lbo@illinois.edu) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | DBA is evaluated on four classification datasets with non-i.i.d. data distributions: Lending Club Loan Data (LOAN) (Kan, 2019), MNIST, CIFAR-10 and Tiny-ImageNet. The data description and parameter setups are summarized in Tb. 1. We refer the readers to Appendix A.1 for more details. |
| Dataset Splits | Yes | The financial dataset LOAN contains the current loan status (Current, Late, Fully Paid, etc.) and latest payment information, which can be used for loan status prediction. It consists of 1,808,534 data samples and we divide them by 51 US states, each of which represents a participant in FL. 80% of data samples are used for training and the rest is for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers. |
| Experiment Setup | Yes | Following the standard setup, we use SGD and train for E local epochs with local learning rate lr and batch size 64. A shared global model is trained by all participants; 10 of them are selected in each round for aggregation. The local and global triggers used are summarized in Appendix A.1. Every attacker's batch is mixed with correctly labeled data and backdoored data at poison ratio r (see Tb. 1). Attackers use their own local poison lr and poison E (see Tb. 1) to maximize their backdoor performance while remaining stealthy. |
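The experiment-setup cell describes how each attacker mixes backdoored samples into its local batches: a fraction `poison_ratio` of every batch is stamped with that attacker's local trigger and relabeled to the target class. A minimal sketch of that mixing step is below; the patch geometry, trigger value, and function names are illustrative assumptions, not the paper's exact trigger patterns.

```python
import numpy as np

def stamp_local_trigger(image, offset=(0, 0), size=2, value=1.0):
    """Stamp a small square 'local trigger' patch onto one image.
    In DBA, each attacker uses a different local trigger; the local
    triggers jointly compose the global trigger. The square patch and
    pixel value here are illustrative, not the paper's exact pattern."""
    img = image.copy()
    r, c = offset
    img[r:r + size, c:c + size] = value
    return img

def poison_batch(images, labels, poison_ratio, target_label, offset=(0, 0)):
    """Mix correctly labeled data with backdoored data inside one batch,
    as described in the setup: a fraction `poison_ratio` of the batch is
    stamped with the local trigger and relabeled to `target_label`."""
    n_poison = int(len(images) * poison_ratio)
    out_imgs, out_labels = [], []
    for i, (x, y) in enumerate(zip(images, labels)):
        if i < n_poison:
            out_imgs.append(stamp_local_trigger(x, offset))
            out_labels.append(target_label)
        else:
            out_imgs.append(x)
            out_labels.append(y)
    return np.stack(out_imgs), np.array(out_labels)
```

An attacker would call `poison_batch` on each mini-batch before its local SGD step, training for the attacker-specific poison E epochs at the poison lr mentioned in the table.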