Parameter Disparities Dissection for Backdoor Defense in Heterogeneous Federated Learning
Authors: Wenke Huang, Mang Ye, Zekun Shi, Guancheng Wan, He Li, Bo Du
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results on various heterogeneous federated scenarios under backdoor attacks demonstrate the effectiveness of our method. |
| Researcher Affiliation | Academia | 1 National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence, Hubei Key Laboratory of Multimedia and Network Communication Engineering, School of Computer Science, Wuhan University, Wuhan, China. 2 Taikang Center for Life and Medical Sciences, Wuhan University, Wuhan, China {wenkehuang,yemang}@whu.edu.cn |
| Pseudocode | Yes | Algorithm 1: FDCR |
| Open Source Code | Yes | https://github.com/wenkehuang/FDCR |
| Open Datasets | Yes | Datasets. Following [105, 70, 51, 30], we evaluate efficacy and robustness on three scenarios: CIFAR-10 [43] contains 50k training and 10k validation images. Each image is 32 × 32 and drawn from 10 classes, e.g., airplanes, cars, and birds. MNIST [46] is a well-known digits dataset with 70,000 images in 10 classes. Fashion-MNIST [103] has 60k train and 10k test examples from 10 classes. |
| Dataset Splits | Yes | Dataset Split: We partition the original training data into training and validation sets with a 9:1 ratio to support Proxy Evaluation Defense methods. |
| Hardware Specification | Yes | We fix the random seed to ensure reproducibility and conduct experiments on an NVIDIA 3090Ti GPU. |
| Software Dependencies | No | The paper mentions 'We utilize the SGD as the local updating optimizer,' but it does not specify version numbers for programming languages or key software libraries used in the experiments, such as Python, PyTorch, or TensorFlow versions. |
| Experiment Setup | Yes | We configure the communication epoch T as 50... The client number K is 10... The local updating round E is 10... The corresponding weight decay is 1e-5 and momentum is 0.9. The local client learning rate is 0.01 in the above three scenarios. |
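The reported setup (a 9:1 train/validation split for the Proxy Evaluation Defense methods, and local SGD with learning rate 0.01, momentum 0.9, weight decay 1e-5) can be sketched in plain Python. This is a minimal illustration, not the authors' implementation; the function names and the fixed seed value are assumptions.

```python
import random

def split_train_val(indices, val_ratio=0.1, seed=0):
    """Partition sample indices into train/validation sets (9:1 by default),
    with a fixed seed mirroring the paper's reproducibility note."""
    rng = random.Random(seed)
    shuffled = list(indices)
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_ratio)
    return shuffled[n_val:], shuffled[:n_val]

def sgd_step(weights, grads, velocity, lr=0.01, momentum=0.9, weight_decay=1e-5):
    """One SGD update with momentum and L2 weight decay,
    using the hyperparameters quoted in the setup above."""
    new_w, new_v = [], []
    for w, g, v in zip(weights, grads, velocity):
        g = g + weight_decay * w       # L2 regularization term
        v = momentum * v + g           # momentum buffer update
        new_w.append(w - lr * v)       # parameter step
        new_v.append(v)
    return new_w, new_v

# CIFAR-10's 50k training images split 9:1 -> 45k train / 5k validation.
train_idx, val_idx = split_train_val(range(50000))
print(len(train_idx), len(val_idx))  # 45000 5000
```

Frameworks such as PyTorch expose the same three knobs directly (e.g. `torch.optim.SGD(params, lr=0.01, momentum=0.9, weight_decay=1e-5)`), so the sketch maps one-to-one onto the quoted configuration.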