IBA: Towards Irreversible Backdoor Attacks in Federated Learning
Authors: Thuy Dung Nguyen, Tuan A. Nguyen, Anh Tran, Khoa D Doan, Kok-Seng Wong
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the proposed attack framework on several benchmark datasets, including MNIST, CIFAR-10, and Tiny ImageNet, achieving high success rates while simultaneously bypassing existing backdoor defenses and producing a more durable backdoor effect than other backdoor attacks. Overall, IBA offers a more effective, stealthy, and durable approach to backdoor attacks in FL. |
| Researcher Affiliation | Collaboration | Dung Thuy Nguyen (1,2), Tuan Nguyen (2,3), Tuan Anh Tran (4), Khoa D Doan (2,3), Kok-Seng Wong (2,3); (1) Department of Computer Science, Vanderbilt University, Nashville, TN 37212, USA; (2) VinUni-Illinois Smart Health Center, VinUniversity, Hanoi, Vietnam; (3) College of Engineering & Computer Science, VinUniversity, Hanoi, Vietnam; (4) VinAI Research, Hanoi, Vietnam |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code for this paper is published at https://github.com/sail-research/iba. |
| Open Datasets | Yes | The IBA method is evaluated on three classification datasets: MNIST, CIFAR-10, and Tiny ImageNet. We simulate heterogeneous data partitioning by sampling p_k ~ Dir_K(0.5) for MNIST and CIFAR-10, and Dir_K(0.01) for Tiny ImageNet, allocating a proportion of each class to participating clients (a partitioning sketch follows the table). |
| Dataset Splits | No | The paper mentions simulating data partitioning and refers to the supplementary material for details, but does not explicitly provide specific training/validation/test split percentages or sample counts within the main text. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., specific GPU/CPU models, memory, or cloud instance types). |
| Software Dependencies | No | The paper mentions using the U-Net architecture but does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We follow the established protocol outlined in previous works [28, 32], employing stochastic gradient descent (SGD) optimization with E local epochs, a local learning rate of lr, and a batch size of 32. The constraint values are set to ϵ = 0.3 and ϵ̂ = 0.05, with additional parameters α = 0.5 and β = 0.5. During trigger training, the threshold for backdoor accuracy (BA) is set to local BA = 0.85. The learning rate for the update-generation model is γ_A = 0.0001, and the learning rate for the update classifier model is η = 0.01 (a training-loop sketch follows the table). |
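The Dirichlet-based non-IID partitioning quoted in the Open Datasets row can be sketched as follows. This is a minimal illustration assuming NumPy and integer class labels; the function name `dirichlet_partition` and its arguments are ours, not taken from the paper's released code at https://github.com/sail-research/iba.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split example indices across clients by sampling per-class
    proportions p_k ~ Dir_K(alpha) (heterogeneous, non-IID split)."""
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx_c = np.where(labels == c)[0]
        rng.shuffle(idx_c)
        # Fraction of class c assigned to each client.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx_c)).astype(int)
        for k, part in enumerate(np.split(idx_c, cuts)):
            client_indices[k].extend(part.tolist())
    return [np.array(idx) for idx in client_indices]

# Per the paper: alpha = 0.5 for MNIST / CIFAR-10, alpha = 0.01 for Tiny ImageNet.
# labels = np.array(cifar10_train.targets)
# parts = dirichlet_partition(labels, num_clients=100, alpha=0.5)
```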
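The Experiment Setup row lists the local training recipe (SGD, E local epochs, batch size 32) and several attack hyperparameters. Below is a minimal PyTorch sketch of one client's local round with the reported constants collected in one place; the helper name `local_update` and the constant names are illustrative, and the IBA trigger-training and update-generation logic itself is not reproduced here.

```python
import torch
from torch import nn, optim

def local_update(model, loader, epochs, lr, device="cpu"):
    """One client's local round: plain SGD for `epochs` local epochs,
    matching the reported batch size of 32 and local learning rate lr."""
    model.to(device).train()
    optimizer = optim.SGD(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model.state_dict()

# Hyperparameters reported in the paper (constant names are ours):
EPSILON, EPSILON_HAT = 0.3, 0.05   # constraint values eps and eps-hat
ALPHA, BETA = 0.5, 0.5             # additional weighting parameters
LOCAL_BA_THRESHOLD = 0.85          # backdoor-accuracy threshold during trigger training
LR_UPDATE_GEN = 1e-4               # gamma_A: update-generation model
LR_UPDATE_CLF = 0.01               # eta: update classifier model
BATCH_SIZE = 32
```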