Federated Learning with Label Distribution Skew via Logits Calibration
Authors: Jie Zhang, Zhiqi Li, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Chao Wu
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on federated datasets and real-world datasets demonstrate that FedLC leads to a more accurate global model and much improved performance. |
| Researcher Affiliation | Collaboration | 1Zhejiang University, China 2Youtu Lab, Tencent, China. |
| Pseudocode | Yes | The complete pseudocode of FedLC can be found in Appendix 1 (Algorithm 1: Federated Learning via Logits Calibration, for Client). A hedged sketch of such a calibrated loss follows the table. |
| Open Source Code | No | The paper does not contain an explicit statement or a link indicating the availability of its source code. |
| Open Datasets | Yes | In this study, we conduct a number of experiments on popular image classification benchmark datasets: SVHN (Netzer et al., 2011), CIFAR10 (Krizhevsky et al., 2009), CIFAR100 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009), as well as federated datasets (Synthetic dataset and FEMNIST) proposed in LEAF (Caldas et al., 2019). |
| Dataset Splits | No | The paper discusses training data and testing data but does not explicitly mention using a separate validation split or how such a split was created or accessed. |
| Hardware Specification | Yes | All experiments are conducted with 8 Tesla V100 GPUs. |
| Software Dependencies | No | The paper states "We implement the typical federated setting (McMahan et al., 2017) in PyTorch," but it does not specify the version number for PyTorch or any other software dependency. |
| Experiment Setup | Yes | The size of the local mini-batch is 128. For local training, each client updates the weights via the SGD optimizer with learning rate η = 0.01 and no weight decay. We run each experiment with 5 random seeds and report the average and standard deviation. A sketch of this local update follows the table. |
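The Pseudocode row above points to Algorithm 1 (Federated Learning via Logits Calibration). The heart of that algorithm is replacing plain cross-entropy with a loss over calibrated logits. Below is a minimal PyTorch sketch of such a calibrated loss, assuming a per-class margin of the form τ · n_c^(−1/4) (our reading of FedLC's calibration; the function name, the `tau` default, and the clamping of empty classes are illustrative assumptions, not taken from the paper's appendix).

```python
import torch
import torch.nn.functional as F

def logit_calibrated_loss(logits, targets, class_counts, tau=1.0):
    """Cross-entropy over calibrated logits (hedged sketch of FedLC-style
    calibration; the tau * n_c**(-1/4) margin form is an assumption here).

    logits:       (batch, num_classes) raw model outputs
    targets:      (batch,) integer class labels
    class_counts: (num_classes,) per-client sample count for each class
    tau:          calibration strength (hyperparameter)
    """
    # The margin shrinks for majority classes and grows for minority ones,
    # discouraging the local model from over-predicting frequent labels.
    # Clamping empty classes to a count of 1 avoids an infinite margin.
    margin = tau * class_counts.float().clamp(min=1).pow(-0.25)
    calibrated = logits - margin.unsqueeze(0)  # broadcast over the batch
    return F.cross_entropy(calibrated, targets)
```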
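The Experiment Setup row fixes the local optimizer: SGD with learning rate η = 0.01, no weight decay, and mini-batches of 128. A sketch of one client's local round under exactly those settings is below; it reuses the calibrated loss sketched above, and `local_update`, `local_epochs`, and `class_counts` are placeholder names (the number of local epochs is not stated in this section). The reported protocol of 5 random seeds would simply wrap this whole procedure in an outer loop and average the final results.

```python
import torch
from torch.utils.data import DataLoader

def local_update(model, local_dataset, class_counts, local_epochs=1):
    """One client's local training pass with the reported hyperparameters:
    SGD with lr = 0.01, no weight decay, mini-batch size 128."""
    loader = DataLoader(local_dataset, batch_size=128, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=0.0)
    model.train()
    for _ in range(local_epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = logit_calibrated_loss(model(images), labels, class_counts)
            loss.backward()
            optimizer.step()
    return model.state_dict()  # sent back to the server for aggregation
```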