Personalized Federated Learning via Feature Distribution Adaptation

Authors: Connor Mclaughlin, Lili Su

NeurIPS 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Through extensive computer vision benchmarks, we demonstrate that our method can adjust to complex distribution shifts with significant improvements over current state-of-the-art in data-scarce settings. Our source code is available on GitHub. (Section 5: Experiments)
Researcher Affiliation Academia Connor J. McLaughlin, Lili Su Northeastern University, Boston, MA 02115 {mclaughlin.co,l.su}@northeastern.edu
Pseudocode Yes Algorithm 1: pFedFDA
Open Source Code Yes Our source code is available on GitHub. https://github.com/cj-mclaughlin/pFedFDA
Open Datasets Yes We consider image classification tasks and evaluate our method on four popular datasets. The EMNIST [4] dataset is for 62-class handwriting image classification. The CIFAR10/CIFAR100 [22] datasets are for 10- and 100-class color image classification. The TinyImageNet [23] dataset is for 200-class natural image classification.
Dataset Splits Yes We split each client's data partition 80-20% between training and testing. For pFedFDA, we use k = 2 cross-validation folds to estimate a single βi term for each client.
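The per-client splitting scheme quoted above can be sketched in plain Python. This is an illustrative reading of the report, not the authors' code; the function names and the fold-interleaving strategy are assumptions.

```python
import random

def split_client_data(samples, train_frac=0.8, seed=0):
    """Shuffle one client's data partition and split it 80/20
    between training and testing, as described in the report.
    (Hypothetical helper; not from the pFedFDA repository.)"""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = int(train_frac * len(idx))
    train = [samples[i] for i in idx[:cut]]
    test = [samples[i] for i in idx[cut:]]
    return train, test

def cv_folds(train_samples, k=2):
    """Partition the training split into k cross-validation folds
    (k = 2 in the paper, used to estimate a single beta_i per client)."""
    return [train_samples[f::k] for f in range(k)]
```

With 100 samples this yields an 80-sample training split, a 20-sample test split, and two 40-sample folds per client.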
Hardware Specification Yes All experiments are implemented in PyTorch 2.1 [36] and were each trained with a single NVIDIA A100 GPU.
Software Dependencies Yes All experiments are implemented in PyTorch 2.1 [36] and were each trained with a single NVIDIA A100 GPU.
Experiment Setup Yes We train all algorithms with mini-batch SGD for E = 5 local epochs and R = 200 global rounds. We apply no data augmentation besides normalization into the range [-1, 1]. For pFedFDA, we use k = 2 cross-validation folds to estimate a single βi term for each client. Additional training details and hyperparameters for each baseline method are provided in Appendix C.2.
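The reported setup can be summarized as a small configuration sketch. The config keys and the pixel-scaling helper below are assumptions for illustration; only the quoted hyperparameter values (SGD, E = 5, R = 200, k = 2, the [-1, 1] range) come from the report.

```python
def normalize_to_unit_range(pixels):
    """Map raw 8-bit pixel values in [0, 255] into [-1, 1], the only
    preprocessing the report mentions besides this normalization.
    (Hypothetical helper; the repository may normalize differently.)"""
    return [p / 127.5 - 1.0 for p in pixels]

# Hypothetical configuration mirroring the reported training setup.
CONFIG = {
    "optimizer": "SGD",      # mini-batch SGD
    "local_epochs": 5,       # E = 5 local epochs per round
    "global_rounds": 200,    # R = 200 global rounds
    "cv_folds": 2,           # k = 2 folds to estimate beta_i per client
}
```

A pixel value of 0 maps to -1.0 and 255 maps to 1.0, so inputs span exactly the stated range.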