Federated Adversarial Learning: A Framework with Convergence Analysis
Authors: Xiaoxiao Li, Zhao Song, Jiaming Yang
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our main result theoretically shows that the minimum loss under our algorithm can converge to ϵ small with chosen learning rate and communication rounds. It is noteworthy that our analysis is feasible for non-IID clients. ... We conduct experiments in Section 6 to verify Theorem 4.1 empirically. |
| Researcher Affiliation | Collaboration | 1University of British Columbia, BC, Canada. 2Adobe Research, CA, USA. 3University of Michigan, Ann Arbor, MI, USA. Correspondence to: Zhao Song <zsong@adobe.com>, Jiaming Yang <jiamyang@umich.edu>. |
| Pseudocode | Yes | Algorithm 1 Federated Adversarial Learning (FAL) (a hedged Python sketch of one FAL round appears after this table) |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | No | We simulate synthetic data with different levels of data separability as shown in Fig. 1. ... For each class, we simulated 400 data points as training sets and 100 data points as a testing set. |
| Dataset Splits | No | The paper mentions training and testing sets, but no explicit validation set or split is described. "For each class, we simulated 400 data points as training sets and 100 data points as a testing set." |
| Hardware Specification | No | The paper does not specify any hardware details such as CPU/GPU models or cloud resources used for experiments. |
| Software Dependencies | No | The paper mentions "PGD (Madry et al., 2018)", "FedAvg (McMahan et al., 2017)", and "SGD optimizer" but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We deploy PGD (Madry et al., 2018) to generate adversarial examples during FAL training, using a perturbation box of radius ρ = 0.0314, 7 perturbation steps, and step length 0.00784. Model aggregation follows FedAvg (McMahan et al., 2017) after each local update. We use the SGD optimizer with batch size 50. ... training convergence for high (blue) and medium (green) separability datasets with learning rate 1e-5. A hedged PGD sketch with these settings follows the table. |
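
The PGD settings quoted in the Experiment Setup row (radius ρ = 0.0314, 7 perturbation steps, step length 0.00784) are enough to reconstruct the inner maximization step. Below is a minimal PyTorch-style sketch under those settings; the function name `perturb_pgd`, its signature, and the uniform random start are illustrative assumptions, not the authors' released code.

```python
import torch

def perturb_pgd(model, loss_fn, x, y, radius=0.0314, step_size=0.00784, steps=7):
    """PGD attack sketch using the hyperparameters quoted in the table
    (l_inf radius 0.0314, 7 steps, step length 0.00784).
    Name, signature, and random start are illustrative, not from the paper."""
    # Random start inside the l_inf ball, clipped to a valid input range.
    x_adv = (x.detach() + torch.empty_like(x).uniform_(-radius, radius)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()                            # ascend the loss
            x_adv = x.detach() + (x_adv - x.detach()).clamp(-radius, radius)   # project onto the l_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                                      # keep valid pixel range
    return x_adv.detach()
```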
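Algorithm 1 (FAL) itself is not reproduced in this summary, but the structure described by the rows above, local adversarial training on each client followed by FedAvg-style parameter averaging, can be sketched as follows. This is a minimal, hypothetical single-round implementation that reuses the `perturb_pgd` helper from the previous sketch; the model, client data loaders, local step count, and learning rate (mirroring the 1e-5 value quoted for the synthetic experiment) are placeholders rather than the paper's exact configuration.

```python
import copy
import torch

def fal_round(global_model, client_loaders, loss_fn, lr=1e-5, local_steps=1):
    """One communication round of Federated Adversarial Learning (sketch).

    Each client runs adversarial training locally (inner maximization via
    perturb_pgd, outer minimization via SGD), then the server averages the
    resulting weights as in FedAvg. All hyperparameters are illustrative.
    """
    client_states = []
    for loader in client_loaders:
        local_model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local_model.parameters(), lr=lr)
        for step, (x, y) in enumerate(loader):
            if step >= local_steps:
                break
            x_adv = perturb_pgd(local_model, loss_fn, x, y)   # inner maximization (sketch above)
            opt.zero_grad()
            loss_fn(local_model(x_adv), y).backward()         # outer minimization on adversarial examples
            opt.step()
        client_states.append(local_model.state_dict())

    # FedAvg-style aggregation: uniform average of client parameters.
    avg_state = {
        k: torch.stack([s[k].float() for s in client_states]).mean(dim=0)
        for k in client_states[0]
    }
    global_model.load_state_dict(avg_state)
    return global_model
```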