FedDisco: Federated Learning with Discrepancy-Aware Collaboration
Authors: Rui Ye, Mingkai Xu, Jianyu Wang, Chenxin Xu, Siheng Chen, Yanfeng Wang
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our FedDisco outperforms several state-of-the-art methods and can be easily incorporated with many existing methods to further enhance the performance. Our code will be available at https://github.com/MediaBrain-SJTU/FedDisco. |
| Researcher Affiliation | Collaboration | (1) Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai, China; (2) Carnegie Mellon University, Pittsburgh, PA, USA; (3) Shanghai AI Laboratory, Shanghai, China. |
| Pseudocode | Yes | We also provide an algorithm flow in Algorithm 1. (A hedged sketch of the discrepancy-aware aggregation step appears below the table.) |
| Open Source Code | Yes | Our code will be available at https://github.com/MediaBrain-SJTU/FedDisco. |
| Open Datasets | Yes | We consider five image classification datasets to cover medical, natural and artificial scenarios, including HAM10000 (Tschandl et al., 2018), CIFAR-10 & CIFAR100 (Krizhevsky et al., 2009), CINIC-10 (Darlow et al., 2018) and Fashion-MNIST (Xiao et al., 2017); and AG News (Zhang et al., 2015), a text classification dataset. |
| Dataset Splits | No | The paper describes how data is distributed across clients (e.g., Dirichlet distribution, biased/unbiased clients) and mentions a 'global testing set' or 'uniform testing set' for HAM10000, but it does not consistently specify training/validation/test splits for all datasets, nor exact percentages or sample counts for a validation split. (A Dirichlet partitioning sketch appears below the table.) |
| Hardware Specification | Yes | We run all methods by using PyTorch framework (Paszke et al., 2019) on a single NVIDIA GTX 3090 GPU. |
| Software Dependencies | No | The paper mentions using the 'PyTorch framework' but does not specify a version number for PyTorch or any other software libraries required for reproducibility. |
| Experiment Setup | Yes | The number of local epochs and batch size are 10 and 64, respectively. We run federated learning for 100 rounds. We use ResNet18 (He et al., 2016) for HAM10000, a simple CNN network for other image datasets and TextCNN (Zhang & Wallace, 2015) for AG News. We use SGD optimizer with a 0.01 learning rate. (A minimal local-update sketch wiring these hyperparameters together appears below the table.) |
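The Pseudocode row confirms that Algorithm 1 exists but does not reproduce it. The sketch below illustrates the discrepancy-aware aggregation idea named in the paper's title: each client's aggregation weight reflects both its relative dataset size and how far its local label distribution sits from a uniform reference. The functional form `ReLU(n_k - a*d_k + b)`, the L2 discrepancy, and the hyperparameters `a` and `b` are assumptions for illustration, not a transcription of Algorithm 1.

```python
import numpy as np

def disco_weights(class_counts, a=0.5, b=0.1):
    """Discrepancy-aware aggregation weights (illustrative sketch).

    class_counts: (num_clients, num_classes) array of per-client
    label histograms. a and b are hypothetical hyperparameters.
    """
    sizes = class_counts.sum(axis=1)              # n_k: samples per client
    n = sizes / sizes.sum()                       # relative dataset sizes
    local = class_counts / sizes[:, None]         # per-client label distributions
    num_classes = class_counts.shape[1]
    uniform = np.full(num_classes, 1.0 / num_classes)
    d = np.linalg.norm(local - uniform, axis=1)   # d_k: L2 discrepancy to uniform
    w = np.maximum(n - a * d + b, 0.0)            # down-weight skewed clients
    return w / max(w.sum(), 1e-12)                # normalize to sum to 1
```

Clients whose label histograms deviate strongly from uniform receive smaller aggregation weights, which matches the intuition the paper's abstract describes.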
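The Dataset Splits row mentions Dirichlet-distributed client partitions without giving exact parameters. A common implementation, sketched below, draws per-class client proportions from Dirichlet(alpha) and splits each class's indices accordingly; `alpha = 0.5` and `num_clients = 10` are placeholder values, since the table does not fix them.

```python
import numpy as np

def dirichlet_partition(labels, num_clients=10, alpha=0.5, seed=0):
    """Partition sample indices across clients with a Dirichlet prior (sketch).

    labels: 1-D array of integer class labels.
    alpha: concentration parameter; smaller alpha yields more skewed
    (non-IID) client label distributions.
    """
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Draw this class's split proportions across the clients.
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props) * len(idx)).astype(int)[:-1]
        for client, shard in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return [np.array(ci) for ci in client_indices]
```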
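The Experiment Setup row pins down the reported hyperparameters (10 local epochs, batch size 64, 100 rounds, SGD with learning rate 0.01). A minimal local-update sketch wiring them together is shown below; the model, the data loader, and the omission of momentum and weight decay (unspecified in the quoted setup) are assumptions, and this is not the authors' released code.

```python
import torch
from torch import nn, optim

# Hyperparameters quoted in the Experiment Setup row.
CONFIG = dict(local_epochs=10, batch_size=64, rounds=100, lr=0.01)

def local_update(model, loader, device="cuda"):
    """One client's local training pass (sketch)."""
    model.train().to(device)
    opt = optim.SGD(model.parameters(), lr=CONFIG["lr"])
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(CONFIG["local_epochs"]):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()  # sent back to the server for aggregation
```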