Dual Calibration-based Personalised Federated Learning
Authors: Xiaoli Tang, Han Yu, Run Tang, Chao Ren, Anran Li, Xiaoxiao Li
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on widely used benchmark datasets demonstrate that DC-PFL outperforms eight state-of-the-art methods, surpassing the best-performing baseline by 1.22% and 9.22% in terms of accuracy on datasets CIFAR-10 and CIFAR-100, respectively. |
| Researcher Affiliation | Academia | (1) College of Computing and Data Science, Nanyang Technological University, Singapore; (2) South China University of Technology, China; (3) Department of Electrical and Computer Engineering, The University of British Columbia, Canada |
| Pseudocode | Yes | Algorithm 1 DC-PFL |
| Open Source Code | No | The paper does not contain an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We assess the performance of the proposed DC-PFL alongside baselines on datasets CIFAR-10 and CIFAR-100 (https://www.cs.toronto.edu/~kriz/cifar.html). |
| Dataset Splits | Yes (see the split sketch after this table) | Furthermore, the data from each client is partitioned into three distinct subsets: training, evaluation, and testing, with an 8:1:1 allocation ratio. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment. |
| Experiment Setup | Yes (see the configuration sketch after this table) | We optimize FL hyperparameters through an extensive grid search by adjusting the batch size for local training from {32, 64, 128, 256, 512} and the number of local training epochs from {1, 10, 30, 50, 100}. We utilize the SGD optimizer with a fixed learning rate (η) of 0.01 for both local training and global classifier training. The total number of communication rounds (T) is set to 100 on CIFAR-10 and to 500 on CIFAR-100 to ensure convergence across all algorithms. |
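The dataset and split rows above only state which datasets are used and the 8:1:1 per-client allocation. The snippet below is a minimal sketch of that split, assuming PyTorch/torchvision data loading and a hypothetical IID assignment of samples to clients; the paper does not specify how each client's local data are drawn, so the client partitioning here is illustrative only.

```python
# Illustrative sketch of the per-client 8:1:1 train/eval/test split quoted above.
# The assignment of CIFAR-10 samples to clients is an assumption (IID shards),
# not necessarily the partitioning scheme used in DC-PFL.
import torch
from torch.utils.data import Subset, random_split
from torchvision import datasets, transforms


def load_cifar10(root="./data"):
    """Load the CIFAR-10 training set with standard normalisation."""
    tfm = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.4914, 0.4822, 0.4465),
                             (0.2470, 0.2435, 0.2616)),
    ])
    return datasets.CIFAR10(root=root, train=True, download=True, transform=tfm)


def split_client_data(client_dataset, ratios=(0.8, 0.1, 0.1), seed=0):
    """Split one client's local data into train/eval/test with an 8:1:1 ratio."""
    n = len(client_dataset)
    n_train = int(ratios[0] * n)
    n_eval = int(ratios[1] * n)
    n_test = n - n_train - n_eval
    gen = torch.Generator().manual_seed(seed)
    return random_split(client_dataset, [n_train, n_eval, n_test], generator=gen)


if __name__ == "__main__":
    full = load_cifar10()
    # Hypothetical IID assignment of samples to 10 clients, for illustration.
    num_clients = 10
    shards = torch.randperm(len(full)).chunk(num_clients)
    clients = [Subset(full, shard.tolist()) for shard in shards]
    train_set, eval_set, test_set = split_client_data(clients[0])
    print(len(train_set), len(eval_set), len(test_set))  # e.g. 4000 500 500
```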
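The experiment-setup row lists the tuned and fixed hyperparameters. Below is a minimal sketch that encodes that grid and the stated optimizer configuration, assuming PyTorch's SGD; the model shown is a placeholder, and the local training loop and DC-PFL's dual-calibration steps are not reproduced here.

```python
# Minimal sketch of the hyperparameter grid and optimizer settings quoted in
# the "Experiment Setup" row. Only the reported values are encoded; the model
# below is a placeholder for illustration.
import itertools

import torch
import torch.nn as nn

BATCH_SIZES = [32, 64, 128, 256, 512]        # grid-searched local batch sizes
LOCAL_EPOCHS = [1, 10, 30, 50, 100]          # grid-searched local epochs
LEARNING_RATE = 0.01                         # fixed SGD learning rate (eta)
ROUNDS = {"cifar10": 100, "cifar100": 500}   # communication rounds T per dataset


def make_optimizer(model: nn.Module) -> torch.optim.Optimizer:
    """SGD with a fixed learning rate of 0.01, as stated in the paper."""
    return torch.optim.SGD(model.parameters(), lr=LEARNING_RATE)


def hyperparameter_grid():
    """Enumerate the (batch_size, local_epochs) combinations used for tuning."""
    yield from itertools.product(BATCH_SIZES, LOCAL_EPOCHS)


if __name__ == "__main__":
    # Placeholder linear classifier over flattened 32x32x3 CIFAR images.
    model = nn.Linear(3072, 10)
    optimizer = make_optimizer(model)
    print(optimizer)
    for batch_size, local_epochs in hyperparameter_grid():
        print(f"candidate config: batch_size={batch_size}, local_epochs={local_epochs}")
```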