On Bridging Generic and Personalized Federated Learning for Image Classification

Authors: Hong-You Chen, Wei-Lun Chao

ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate FED-ROD on multiple datasets under various non-IID settings. FED-ROD consistently outperforms existing generic and personalized FL algorithms in both setups.
Researcher Affiliation | Academia | Hong-You Chen, The Ohio State University, USA; Wei-Lun Chao, The Ohio State University, USA
Pseudocode | Yes | In Algorithm 1 and Algorithm 2, we provide pseudocode of our FED-ROD algorithm. Algorithm 1: FED-ROD (linear) (Federated Robust Decoupling) [...] Algorithm 2: FED-ROD (hyper) (Federated Robust Decoupling). (See the first sketch after this table.)
Open Source Code | Yes | We also provide our code in https://github.com/hongyouc/Fed-RoD.
Open Datasets | Yes | Datasets, models, and settings. We use CIFAR-10/100 (Krizhevsky et al., 2009) and Fashion MNIST (FMNIST) (Xiao et al., 2017).
Dataset Splits | Yes | The best personalized model after local training is selected for each client using a validation set. (See the selection sketch after this table.)
Hardware Specification | Yes | We run our experiments on four GeForce RTX 2080 Ti GPUs with Intel i9-9960X CPUs.
Software Dependencies | No | The paper mentions using "SGD optimizer" but does not specify software versions for libraries or frameworks (e.g., Python, PyTorch, TensorFlow, with version numbers).
Experiment Setup | Yes | We train every FL algorithm for 100 rounds, with 5 local epochs in each round. We initialize the model weights from normal distributions. As mentioned in (Li et al., 2020b), the local learning rate must decay along the communication rounds. We initialize it with 0.01 and decay it by 0.99 every round, similar to (Acar et al., 2021). Throughout the experiments, we use the SGD optimizer with weight decay 1e-5 and a 0.9 momentum. The mini-batch size is 40 (16 for EMNIST). (See the hyper-parameter sketch after this table.)
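
First sketch (Pseudocode row). To make the pseudocode row more concrete, here is a minimal sketch of the decoupled-predictor idea described in Algorithm 1 (FED-ROD, linear variant), written against a PyTorch-style API. The class and function names, the residual combination of the two heads, and the balanced-softmax form of the generic loss are assumptions based on the paper's description, not the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FedRodClientModel(nn.Module):
    """Shared feature extractor plus two predictors: a generic head that is
    aggregated by the server and a personalized head kept on the client."""
    def __init__(self, body: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.body = body                                  # shared across clients
        self.g_head = nn.Linear(feat_dim, num_classes)    # generic predictor
        self.p_head = nn.Linear(feat_dim, num_classes)    # personalized predictor (local only)

    def forward(self, x):
        feat = self.body(x)
        g_logits = self.g_head(feat)
        # Assumption: the personalized prediction is a residual correction on
        # top of the (detached) generic logits, so the personalized loss only
        # updates p_head and leaves the shared body and generic head untouched.
        p_logits = g_logits.detach() + self.p_head(feat.detach())
        return g_logits, p_logits

def balanced_softmax_ce(logits, targets, class_counts):
    # Class-prior-adjusted cross entropy ("balanced risk minimization").
    adjusted = logits + torch.log(class_counts.float().clamp(min=1))
    return F.cross_entropy(adjusted, targets)

def local_update(model, loader, class_counts, lr):
    opt = torch.optim.SGD(model.parameters(), lr=lr,
                          momentum=0.9, weight_decay=1e-5)
    for x, y in loader:
        g_logits, p_logits = model(x)
        loss = balanced_softmax_ce(g_logits, y, class_counts)   # generic objective
        loss = loss + F.cross_entropy(p_logits, y)               # personalized (ERM) objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Only the shared body and the generic head are returned for FedAvg-style
    # aggregation; the personalized head never leaves the client.
    return {k: v for k, v in model.state_dict().items() if not k.startswith("p_head")}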
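
Selection sketch (Dataset Splits row). The dataset-splits row quotes a per-client, validation-based selection of the personalized model. One plausible reading, again as a hedged sketch built on the local_update helper above, is to evaluate the personalized prediction on a held-out split after each local epoch and keep the best checkpoint; the helper names are hypothetical.

import copy
import torch

@torch.no_grad()
def personalized_accuracy(model, loader):
    correct, total = 0, 0
    for x, y in loader:
        _, p_logits = model(x)                       # evaluate the personalized prediction
        correct += (p_logits.argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)

def train_with_selection(model, train_loader, val_loader, class_counts, lr, local_epochs=5):
    best_acc, best_state = -1.0, None
    for _ in range(local_epochs):
        local_update(model, train_loader, class_counts, lr)   # defined in the first sketch
        acc = personalized_accuracy(model, val_loader)
        if acc > best_acc:
            best_acc, best_state = acc, copy.deepcopy(model.state_dict())
    return best_state, best_acc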
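
Hyper-parameter sketch (Experiment Setup row). The experiment-setup row pins down the optimizer and schedule: SGD with momentum 0.9 and weight decay 1e-5, learning rate initialized at 0.01 and decayed by 0.99 every communication round, 100 rounds with 5 local epochs, and batch size 40. A self-contained sketch wiring up those quoted values might look as follows; rebuilding the optimizer once per round is an assumption about how the decay is applied.

import torch
import torch.nn as nn

ROUNDS, LOCAL_EPOCHS, BATCH_SIZE = 100, 5, 40    # batch size 16 for EMNIST, per the quote
INIT_LR, ROUND_DECAY = 0.01, 0.99

def round_lr(round_idx: int) -> float:
    # Learning rate used by every client in a given communication round (0-indexed).
    return INIT_LR * (ROUND_DECAY ** round_idx)

def make_optimizer(model: nn.Module, round_idx: int) -> torch.optim.SGD:
    return torch.optim.SGD(model.parameters(),
                           lr=round_lr(round_idx),
                           momentum=0.9,
                           weight_decay=1e-5)

# Example: round_lr(0) == 0.01, while round_lr(99) is roughly 0.0037 in the final round.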