Dual Defense: Enhancing Privacy and Mitigating Poisoning Attacks in Federated Learning
Authors: Runhua Xu, Shiqi Gao, Chao Li, James Joshi, Jianxin Li
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted extensive experiments on various model poisoning attacks and FL scenarios, including both cross-device and cross-silo FL. Experiments on publicly available datasets demonstrate that DDFed successfully protects model privacy and effectively defends against model poisoning threats. |
| Researcher Affiliation | Academia | Runhua Xu Beihang University runhua@buaa.edu.cn Shiqi Gao Beihang University gaoshiqi@buaa.edu.cn Chao Li Beijing Jiaotong University li.chao@bjtu.edu.cn James Joshi University of Pittsburgh jjoshi@pitt.edu Jianxin Li Beihang University and Zhongguancun Laboratory lijx@buaa.edu.cn |
| Pseudocode | Yes | Due to space limitations, the formal algorithm pseudocode is provided solely in Appendix A.1. |
| Open Source Code | Yes | The experimental DDFed is available on the GitHub repository. |
| Open Datasets | Yes | We assessed our proposed DDFed framework using publicly available benchmark datasets: MNIST[19], a collection of handwritten digits, and Fashion-MNIST (FMNIST)[33], which includes images of various clothing items, offering a more challenging and diverse dataset for federated learning tasks. |
| Dataset Splits | No | The paper mentions using MNIST and FMNIST datasets but does not explicitly state the training/validation/test dataset splits needed for reproduction. |
| Hardware Specification | Yes | Note that the time-related experiments were conducted on a macOS platform with an Apple M2 Max chip and 96GB of memory. |
| Software Dependencies | No | This secure aggregation is implemented through the TenSEAL library [4]. |
| Experiment Setup | Yes | The default FL training involves 10 clients randomly chosen from 100 for each communication round. Furthermore, we employ a batch size of 64, with each client conducting local training over three epochs per round using an SGD optimizer with a momentum of 0.9 and a learning rate of 0.01. Our DDFed implementation's default epsilon (ε) value is set to 0.01 unless specified differently. |
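The experiment setup above can be sketched as a minimal round of federated averaging. This is an illustrative outline, not the authors' implementation: the function names are hypothetical, the local training step is a placeholder for the paper's 3 epochs of SGD (momentum 0.9, lr 0.01, batch size 64), and the aggregation here is plain FedAvg, whereas DDFed performs it under CKKS homomorphic encryption via TenSEAL so the server never sees raw updates.

```python
import random

NUM_CLIENTS = 100       # total client pool, per the paper's default setup
CLIENTS_PER_ROUND = 10  # clients sampled each communication round

def local_update(client_id, global_model):
    # Placeholder for 3 local epochs of SGD (momentum=0.9, lr=0.01,
    # batch size 64); here we just perturb the weights deterministically.
    return [w + 0.01 * (client_id % 3 - 1) for w in global_model]

def aggregate(updates):
    # Plain FedAvg mean over client updates. In DDFed this step runs
    # over encrypted updates (TenSEAL/CKKS), not plaintext lists.
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

def run_round(global_model, rng):
    # Sample 10 of 100 clients, collect their updates, and aggregate.
    sampled = rng.sample(range(NUM_CLIENTS), CLIENTS_PER_ROUND)
    updates = [local_update(c, global_model) for c in sampled]
    return aggregate(updates)

rng = random.Random(0)
model = [0.0, 0.0]
model = run_round(model, rng)
print(len(model))  # the model keeps its shape across rounds
```

The sketch only captures the client-sampling and aggregation skeleton; poisoning defenses (similarity-based scoring of encrypted updates) and the ε threshold sit between `local_update` and `aggregate` in the actual framework.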