Federated Robustness Propagation: Sharing Adversarial Robustness in Heterogeneous Federated Learning

Authors: Junyuan Hong, Haotao Wang, Zhangyang Wang, Jiayu Zhou

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the rationality and effectiveness of our method through extensive experiments. Especially, the proposed method is shown to grant federated models remarkable robustness even when only a small portion of users afford AT during learning. Source code can be accessed at https://github.com/illidanlab/FedRBN.
Researcher Affiliation | Academia | (1) Department of Computer Science and Engineering, Michigan State University; (2) Department of Computer Science and Engineering, University of Texas at Austin
Pseudocode | Yes | Algorithm 1: FedRBN: user-end training; Algorithm 2: FedRBN: server-end training
Open Source Code | Yes | Source code can be accessed at https://github.com/illidanlab/FedRBN.
Open Datasets | Yes | We used two multi-domain datasets for the setting. The first is a subset (30%) of DIGITS, a benchmark for domain adaptation (Peng et al. 2019b). DIGITS includes 5 different domains: MNIST (MM) (Lecun et al. 1998), SVHN (SV) (Netzer et al. 2011), USPS (US) (Hull 1994), Synth Digits (SY) (Ganin and Lempitsky 2015), and MNIST-M (MM) (Ganin and Lempitsky 2015). The second dataset is DOMAINNET (Peng et al. 2019a) processed by (Li et al. 2020b).
Dataset Splits | No | We uniformly split the dataset for each domain into 10 subsets for DIGITS and 5 for DOMAINNET, following (Li et al. 2020b), which are distributed to different users, respectively. ... For AT users, we use n-step PGD (projected gradient descent) attack (Madry et al. 2018) with a constant noise magnitude ϵ. Following (Madry et al. 2018), we use ϵ = 8/255, n = 7, and attack inner-loop step size 2/255, for training, validation, and test.
Hardware Specification | No | The paper does not mention any specific hardware (e.g., GPU model, CPU type, memory size) used for running the experiments. It only refers to 'resource constraints' and 'computation capacities' generally.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers, such as Python, PyTorch, TensorFlow, or other libraries.
Experiment Setup | Yes | For AT users, we use n-step PGD (projected gradient descent) attack (Madry et al. 2018) with a constant noise magnitude ϵ. Following (Madry et al. 2018), we use ϵ = 8/255, n = 7, and attack inner-loop step size 2/255, for training, validation, and test. We uniformly split the dataset for each domain into 10 subsets for DIGITS and 5 for DOMAINNET... Each user trains local model for one epoch per communication round.
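The PGD settings quoted in the rows above (ϵ = 8/255, n = 7 steps, inner-loop step size 2/255) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear scorer and logistic loss are stand-ins for the paper's networks, and all function names are assumptions.

```python
import numpy as np

# Sketch of the n-step PGD attack (Madry et al. 2018) with the quoted settings:
# epsilon = 8/255, n = 7 steps, step size alpha = 2/255.
EPS, STEPS, ALPHA = 8 / 255, 7, 2 / 255

def input_grad(x, w, y):
    """Gradient of the logistic loss w.r.t. the input x, for a linear scorer w.x."""
    p = 1.0 / (1.0 + np.exp(-float(w @ x)))  # predicted probability of class 1
    return (p - y) * w                       # dL/dx for a label y in {0, 1}

def pgd_attack(x, w, y, eps=EPS, steps=STEPS, alpha=ALPHA):
    """Projected gradient ascent on the loss inside an L-inf ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(input_grad(x_adv, w, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid pixel range
    return x_adv
```

Because n · α = 14/255 exceeds ϵ = 8/255, the iterates can reach the boundary of the perturbation ball, which is the usual rationale for this parameter choice.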
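The per-domain partitioning described in the table (each domain's data uniformly split into 10 subsets for DIGITS and 5 for DOMAINNET, one per user) amounts to a shuffle-and-split. The sketch below is an assumption about that procedure; the function name and seeding are not from the paper's code.

```python
import numpy as np

def split_domain(indices, n_users, seed=0):
    """Shuffle one domain's example indices and split them evenly across users."""
    rng = np.random.default_rng(seed)        # fixed seed for a reproducible split
    shuffled = rng.permutation(indices)
    return np.array_split(shuffled, n_users) # one index subset per user
```

For DIGITS this would be called with `n_users=10` per domain, and for DOMAINNET with `n_users=5`.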
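The schedule in the last row (each user trains its local model for one epoch per communication round), together with the user-end/server-end pseudocode noted above, suggests a FedAvg-style loop. The following is a hypothetical sketch under that assumption: it uses a scalar least-squares model as a stand-in and omits FedRBN's robust batch-norm propagation entirely.

```python
import numpy as np

def local_epoch(w, data, lr=0.1):
    """User end: one epoch of SGD on a least-squares stand-in model."""
    for x, y in data:
        w = w - lr * (w * x - y) * x  # gradient of 0.5 * (w*x - y)^2
    return w

def communication_round(w_global, user_data):
    """Server end: broadcast the global model, collect one-epoch updates, average."""
    local_models = [local_epoch(w_global, d) for d in user_data]
    return float(np.mean(local_models))
```

Repeated rounds drive the averaged model toward the users' shared optimum, which is the behavior the one-epoch-per-round schedule relies on.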