FedSR: A Simple and Effective Domain Generalization Method for Federated Learning

Authors: A. Tuan Nguyen, Philip Torr, Ser Nam Lim

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results suggest that our method significantly outperforms relevant baselines in this particular problem.
Researcher Affiliation | Collaboration | A. Tuan Nguyen (University of Oxford, tuan@robots.ox.ac.uk); Philip H. S. Torr (University of Oxford, philip.torr@eng.ox.ac.uk); Ser-Nam Lim (Meta AI Research, sernamlim@fb.com)
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | Yes | Our code will be released at https://github.com/atuannguyen/FedSR.
Open Datasets | Yes | To evaluate our method, we perform experiments in four datasets (ranging from easy to more challenging) that are commonly used in the literature for domain generalization: Rotated MNIST [10], PACS [20], OfficeHome [40], DomainNet [33].
Dataset Splits | Yes | Following standard practice, we use 90% of available data as training data and 10% as validation data.
Hardware Specification | Yes | We train all our models with NVIDIA A100 GPUs from our AWS cluster.
Software Dependencies | No | The paper mentions software components and models such as ResNet18, ResNet50, SGD, and ImageNet, but does not provide specific version numbers for any software libraries or frameworks used.
Experiment Setup | Yes | For the Rotated MNIST dataset... train our network for 500 epochs with stochastic gradient descent (SGD), using a learning rate of 0.001 and minibatch size 64, and report performance on the test domain after the last epoch. Each client performs 5 local optimization iterations within each communication round (E = 5). For the PACS datasets... Each local client uses stochastic gradient descent (SGD) (a total of 5000 iterations) with learning rate 0.01, momentum 0.9, minibatch size 64, and weight decay 5e-4.
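
The Experiment Setup row pins down the per-client optimization schedule. Below is a minimal PyTorch-style sketch of what that schedule looks like inside a FedAvg-style communication round, using the reported hyperparameters (E = 5 local SGD iterations per round; for PACS, learning rate 0.01, momentum 0.9, minibatch size 64, weight decay 5e-4). The model, dummy data, and helper names (local_update, fedavg) are hypothetical placeholders for illustration, not the authors' released implementation, and FedSR's own regularization terms are omitted.

```python
# Minimal sketch of the per-client local update schedule described above,
# assuming a PyTorch setup. Names and the toy model/data are hypothetical.
import copy
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

E = 5            # local optimization iterations per communication round
BATCH_SIZE = 64  # minibatch size reported in the paper
LR = 0.01        # learning rate reported for PACS
MOMENTUM = 0.9
WEIGHT_DECAY = 5e-4


def local_update(global_model: nn.Module, loader: DataLoader) -> dict:
    """Run E local SGD iterations from the current global weights and
    return the client's updated state dict for this round."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=LR, momentum=MOMENTUM,
                          weight_decay=WEIGHT_DECAY)
    criterion = nn.CrossEntropyLoss()
    data_iter = iter(loader)
    for _ in range(E):
        try:
            x, y = next(data_iter)
        except StopIteration:       # restart the loader if it runs out
            data_iter = iter(loader)
            x, y = next(data_iter)
        opt.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        opt.step()
    return model.state_dict()


def fedavg(states: list) -> dict:
    """Plain FedAvg aggregation: average the clients' state dicts."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
    return avg


if __name__ == "__main__":
    # Dummy clients with random data, standing in for the per-domain clients.
    global_model = nn.Linear(10, 7)  # placeholder for the ResNet backbone
    loaders = [
        DataLoader(TensorDataset(torch.randn(256, 10),
                                 torch.randint(0, 7, (256,))),
                   batch_size=BATCH_SIZE, shuffle=True)
        for _ in range(3)
    ]
    for round_idx in range(2):  # a couple of communication rounds
        client_states = [local_update(global_model, dl) for dl in loaders]
        global_model.load_state_dict(fedavg(client_states))
        print(f"round {round_idx}: aggregated {len(client_states)} clients")
```

A real run would swap the placeholder linear model for the ResNet18/ResNet50 backbones mentioned above and add FedSR's representation regularizers to each client's local loss; this sketch only illustrates the reported optimizer settings and the E-step local/aggregate cycle.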