Heterogeneous Personalized Federated Learning by Local-Global Updates Mixing via Convergence Rate
Authors: Meirui Jiang, Anjie Le, Xiaoxiao Li, Qi Dou
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We have theoretically analyzed the convergence of our method in the over-parameterized regime, and experimentally evaluated our method on five datasets. These datasets present heterogeneous data features in natural and medical images. With comprehensive comparison to existing state-of-the-art approaches, our LG-Mix has consistently outperformed them across all datasets (largest accuracy improvement of 5.01%), demonstrating the outstanding efficacy of our method for model personalization. |
| Researcher Affiliation | Academia | Meirui Jiang, Department of Computer Science and Engineering, The Chinese University of Hong Kong, mrjiang@cse.cuhk.edu.hk; Anjie Le, Department of Computer Science and Engineering, The Chinese University of Hong Kong, ajle@cuhk.edu.hk; Xiaoxiao Li, Department of Electrical and Computer Engineering, The University of British Columbia, xiaoxiao.li@ece.ubc.ca; Qi Dou, Department of Computer Science and Engineering, The Chinese University of Hong Kong, qidou@cse.cuhk.edu.hk |
| Pseudocode | Yes | Algorithm 1: LG-Mix: Local-global updates mixing algorithm (a hedged sketch of the mixing step is given after this table). |
| Open Source Code | Yes | Code is available at https://github.com/med-air/HeteroPFL. |
| Open Datasets | Yes | Digits-5 (Zhou et al., 2020; Li et al., 2021c) ... Office-Caltech10 (Gong et al., 2012) ... DomainNet (Peng et al., 2019) ... Camelyon17 (Bandi et al., 2018) ... Retinal dataset which contains retinal fundus images from 6 different sources (Fumero et al., 2011; Sivaswamy et al., 2015; Almazroa et al., 2018; Orlando et al., 2020). |
| Dataset Splits | Yes | For all datasets, we take each data source as one client and split the data of each client into train, validation, and testing sets with a ratio of 0.6, 0.2, and 0.2. (A split sketch is given after this table.) |
| Hardware Specification | Yes | The GPU we used for training is GeForce RTX 2080 Ti. |
| Software Dependencies | No | The paper mentions software like "PyTorch" but does not provide specific version numbers for software dependencies. |
| Experiment Setup | Yes | We use the SGD optimizer with a learning rate of 0.01 and cross-entropy loss for classification tasks, and the Adam optimizer with a learning rate of 1e-3, β = (0.9, 0.99), and Dice loss (Milletari et al., 2016) for the segmentation task. ... The total number of training rounds is 100 with a local update epoch of 1. ... All input images are resized to 28×28. (An optimizer configuration sketch is given after this table.) |
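
For the Pseudocode row, the following is a minimal sketch of the local-global updates mixing step in Algorithm 1 (LG-Mix). It assumes the mixing ratio `lam` has already been computed from the approximated convergence rates (that computation is not reproduced here); the function name and the dictionary-of-tensors state representation are illustrative assumptions, not the authors' implementation.

```python
from typing import Dict

import torch


def mix_local_global_update(
    w_prev: Dict[str, torch.Tensor],    # client model before this round
    w_local: Dict[str, torch.Tensor],   # client model after local training
    w_global: Dict[str, torch.Tensor],  # server-aggregated (global) model
    lam: float,                         # mixing ratio in [0, 1], assumed to be
                                        # derived from the convergence-rate estimates
) -> Dict[str, torch.Tensor]:
    """Mix local and global updates into one client's personalized model.

    Hedged sketch of the mixing step in Algorithm 1 (LG-Mix); the computation
    of `lam` from approximated convergence rates is not shown.
    """
    w_new = {}
    for name, prev in w_prev.items():
        delta_local = w_local[name] - prev    # local update direction
        delta_global = w_global[name] - prev  # global update direction
        # convex combination of the two updates, weighted by lam
        w_new[name] = prev + lam * delta_global + (1.0 - lam) * delta_local
    return w_new
```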
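
For the Dataset Splits row, here is a small illustrative helper for the per-client 0.6/0.2/0.2 train/validation/test split; the helper name, the use of PyTorch's `random_split`, and the fixed seed are assumptions.

```python
import torch
from torch.utils.data import Dataset, random_split


def split_client_data(dataset: Dataset, seed: int = 0):
    """Split one client's data into train/val/test with a 0.6/0.2/0.2 ratio.

    Hypothetical helper; the seed and the use of random_split are assumptions.
    """
    n = len(dataset)
    n_train = int(0.6 * n)
    n_val = int(0.2 * n)
    n_test = n - n_train - n_val  # remainder goes to the test set
    generator = torch.Generator().manual_seed(seed)
    return random_split(dataset, [n_train, n_val, n_test], generator=generator)
```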
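
For the Experiment Setup row, the sketch below only assembles the reported optimizer hyperparameters (SGD with a learning rate of 0.01 for classification; Adam with a learning rate of 1e-3 and β = (0.9, 0.99) for segmentation); the helper name and the `task` flag are hypothetical.

```python
import torch


def build_optimizer(model: torch.nn.Module, task: str) -> torch.optim.Optimizer:
    """Return an optimizer matching the reported hyperparameters.

    Hypothetical helper: only the hyperparameter values come from the table;
    the function name and the `task` flag are assumptions.
    """
    if task == "classification":
        # SGD with learning rate 0.01, paired with cross-entropy loss
        return torch.optim.SGD(model.parameters(), lr=0.01)
    if task == "segmentation":
        # Adam with learning rate 1e-3 and betas (0.9, 0.99), paired with Dice loss
        return torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.99))
    raise ValueError(f"Unknown task: {task}")
```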