Lockdown: Backdoor Defense for Federated Learning with Isolated Subspace Training

Authors: Tiansheng Huang, Sihao Hu, Ka-Ho Chow, Fatih Ilhan, Selim Tekin, Ling Liu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "However, our empirical study shows that traditional pruning-based solution suffers poison-coupling effect in FL, which significantly degrades the defense performance." "Empirical results show that Lockdown achieves superior and consistent defense performance compared to existing representative approaches against backdoor attacks."
Researcher Affiliation | Academia | "Tiansheng Huang, Sihao Hu, Ka-Ho Chow, Fatih Ilhan, Selim Furkan Tekin, Ling Liu. School of Computer Science, Georgia Institute of Technology, Atlanta, USA. {thuang374, shu335, kchow35, filhan3, stekin6}@gatech.edu, ling.liu@cc.gatech.edu"
Pseudocode | Yes | "Algorithm 1 Lockdown defense"
Open Source Code | Yes | "Our code is available at https://github.com/git-disl/Lockdown."
Open Datasets | Yes | "Datasets and models. We experiment on FashionMNIST, CIFAR10/CIFAR100 and Tiny ImageNet datasets."
Dataset Splits | Yes | "We simulate M = 40 clients, and data is either evenly distributed to each client (IID setting) or is distributed with Dirichlet distribution (Non-IID setting) following (Hsu et al., 2019). The parameter for Dirichlet distribution is set to 0.5 for the Non-IID partition. To simulate the backdoor attack launched by the malicious clients, we follow (Ozdayi et al., 2021) to randomly choose N clients as attackers whose p (percentage) of data in their local datasets are poisoned. The default backdoor settings for our main experiment are p = 50% and N = 4." (A partitioning sketch follows the table.)
Hardware Specification | Yes | "All the experiments are done with a Nvidia A100 GPU."
Software Dependencies | No | The paper refers to "PyTorch style code" but does not specify version numbers for PyTorch, Python, or any other software libraries used in the experiments.
Experiment Setup | Yes | "For Lockdown, we fix the overall sparsity to s = 0.25, the mask agreement threshold to θ = 20, and the initial pruning rate to α0 = 1e-4. The robust learning rate threshold for RLR is set to 8. The number of local epochs and batch size are fixed to 2 and 64, respectively. The learning rate and weight decay used in the local optimizer are fixed to 0.1 and 1e-4. The number of communication rounds is fixed to 200." (Sketches of the mask-agreement and RLR mechanisms follow the table.)
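
The Dataset Splits row describes the standard Dirichlet non-IID partition of (Hsu et al., 2019) with α = 0.5 over M = 40 clients, plus per-attacker poisoning of p = 50% of local data. The following is a minimal sketch of that partition, not the authors' released code; the function names `dirichlet_partition` and `poison_indices` are our own illustrative choices.

```python
import numpy as np

def dirichlet_partition(labels, num_clients=40, alpha=0.5, seed=0):
    """Split sample indices across clients with a per-class Dirichlet prior.

    alpha=0.5 matches the paper's Non-IID setting; larger alpha -> closer to IID.
    """
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        # Sample a client mixture for this class and cut the class accordingly.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions) * len(idx)).astype(int)[:-1]
        for client, shard in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return [np.array(ci) for ci in client_indices]

def poison_indices(client_idx, poison_ratio=0.5, seed=0):
    """Pick the subset of a malicious client's samples to backdoor (p = 50%)."""
    rng = np.random.default_rng(seed)
    n_poison = int(poison_ratio * len(client_idx))
    return rng.choice(client_idx, size=n_poison, replace=False)
```

In the paper's default setting, N = 4 of the 40 clients would be chosen at random as attackers, and `poison_indices` would be applied to each attacker's shard before stamping the trigger pattern and target label onto those samples.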
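Algorithm 1 itself is not reproduced on this page. The sketch below shows only the mask-agreement filtering idea implied by the setup (θ = 20 of M = 40 clients must include a parameter in their sparse masks for it to survive aggregation), under our own assumptions about flattened updates and masked averaging; the function name `consensus_aggregate` is hypothetical and details may differ from the authors' Algorithm 1.

```python
import torch

def consensus_aggregate(updates, masks, theta=20):
    """Average client updates only on coordinates that at least `theta`
    client masks agree to keep; everything else is zeroed (pruned).

    updates: list of 1-D tensors (flattened model deltas), one per client
    masks:   list of 0/1 float tensors of the same shape (clients' sparse masks)
    """
    mask_votes = torch.stack(masks).sum(dim=0)    # how many clients keep each weight
    consensus = (mask_votes >= theta).float()     # mask-agreement threshold
    # Masked average: each coordinate is averaged over the clients that kept it.
    masked_sum = torch.stack([u * m for u, m in zip(updates, masks)]).sum(dim=0)
    denom = mask_votes.clamp(min=1)
    return consensus * masked_sum / denom
```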
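The Experiment Setup row also fixes the threshold for the RLR baseline (Ozdayi et al., 2021) to 8. RLR flips the sign of the server learning rate on coordinates where fewer clients than the threshold agree on the update's sign; the sketch below reflects that published mechanism in a simplified, flattened form, with `rlr_aggregate` as an illustrative name.

```python
import torch

def rlr_aggregate(updates, threshold=8, server_lr=1.0):
    """Robust Learning Rate (Ozdayi et al., 2021): negate the server learning
    rate on coordinates where sign agreement across clients is below threshold."""
    stacked = torch.stack(updates)
    sign_agreement = stacked.sign().sum(dim=0).abs()  # |sum of per-client signs|
    lr = torch.where(sign_agreement >= threshold,
                     torch.full_like(sign_agreement, server_lr),
                     torch.full_like(sign_agreement, -server_lr))
    return lr * stacked.mean(dim=0)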