Adversarial Collaborative Learning on Non-IID Features
Authors: Qinbin Li, Bingsheng He, Dawn Song
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that ADCOL achieves better performance than state-of-the-art FL algorithms on non-IID features. |
| Researcher Affiliation | Academia | UC Berkeley; National University of Singapore. |
| Pseudocode | Yes | Algorithm 1 The ADCOL algorithm |
| Open Source Code | No | The paper does not provide any explicit statement about releasing the source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | Digits: The Digits task has the following five digit data sources from different domains: MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011), USPS (Hull, 1994), Synth Digits (Ganin & Lempitsky, 2015), and MNIST-M (Ganin & Lempitsky, 2015). (2) Office-Caltech-10 (Gong et al., 2012):... (3) DomainNet (Peng et al., 2019a): |
| Dataset Splits | No | For each dataset, we randomly split 1/5 of the original dataset as the test dataset, while the remaining dataset is used as the training dataset. The paper explicitly defines training and test sets but does not mention a separate validation set. |
| Hardware Specification | Yes | We run the experiments on a server with 8 * NVIDIA GeForce RTX 3090, a server with 4 * NVIDIA A100, and a cluster with 45 * NVIDIA GeForce RTX 2080 Ti. |
| Software Dependencies | No | We use PyTorch (Paszke et al., 2019) to implement all approaches. Although PyTorch is mentioned, a specific version number is not provided, nor are other software dependencies with versions. |
| Experiment Setup | Yes | The number of local epochs is set to 10 by default for all FL approaches. The number of epochs is set to 300 for SOLO. For ADCOL and FedProx, we tune µ ∈ {10, 1, 0.1, 0.01, 0.001}... We use the SGD optimizer for training with a learning rate of 0.01. The SGD weight decay is set to 10⁻⁵ and the SGD momentum is set to 0.9. The batch size is set to 64, 32, and 32 for Digits, Office-Caltech-10, and DomainNet, respectively. |
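The 1/5 random test split described in the "Dataset Splits" row can be sketched in plain Python; the function name, seed, and sample count below are illustrative assumptions, not from the paper, which does not specify how the split was seeded or implemented.

```python
import random

def split_train_test(n_samples, test_fraction=0.2, seed=0):
    """Randomly hold out `test_fraction` (1/5) of sample indices as the
    test set; the remaining indices form the training set."""
    rng = random.Random(seed)
    indices = list(range(n_samples))
    rng.shuffle(indices)
    n_test = int(n_samples * test_fraction)
    return indices[n_test:], indices[:n_test]

# Example: 1000 samples -> 800 train / 200 test, with no overlap.
train_idx, test_idx = split_train_test(1000)
```

In practice each client would apply such a split to its own domain's data (e.g. MNIST vs. SVHN), since the features are non-IID across clients.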