FedWon: Triumphing Multi-domain Federated Learning Without Normalization

Authors: Weiming Zhuang, Lingjuan Lyu

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive experimentation on five datasets and five models, our comprehensive experimental results demonstrate that FedWon surpasses both FedAvg and the current state-of-the-art method (FedBN) across all experimental setups, achieving notable accuracy improvements of more than 10% in certain domains.
Researcher Affiliation | Industry | Weiming Zhuang (Sony AI, weiming.zhuang@sony.com); Lingjuan Lyu (Sony AI, lingjuan.lv@sony.com)
Pseudocode | Yes | Listing 1 provides the implementation of WSConv in PyTorch. (A hedged sketch of such a layer appears after this table.)
Open Source Code | No | The source code will be released.
Open Datasets | Yes | We conduct experiments for multi-domain FL using three datasets: Digits-Five (Li et al., 2021), Office-Caltech-10 (Gong et al., 2012), and DomainNet (Peng et al., 2019). Digits-Five consists of five sets of 28x28 digit images, including MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011), USPS (Hull, 1994), SynthDigits (Ganin & Lempitsky, 2015), and MNIST-M (Ganin & Lempitsky, 2015).
Dataset Splits | No | The paper does not explicitly provide details about a validation dataset split (percentages or absolute counts) separate from the training and test sets.
Hardware Specification | Yes | We implement FedWon using PyTorch (Paszke et al., 2017) and run experiments on a cluster of eight NVIDIA T4 GPUs.
Software Dependencies | No | The paper cites "PyTorch (Paszke et al., 2017)" but gives no version number for PyTorch or for any other key software library or dependency.
Experiment Setup | Yes | We use cross-entropy loss and stochastic gradient descent (SGD) as the optimizer, with learning rates tuned over the range [0.001, 0.1] for all methods. (A minimal training-loop sketch also follows this table.)
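
The paper's Listing 1 (WSConv in PyTorch) is referenced above but not reproduced on this page. As context, below is a minimal sketch of a weight-standardized convolution following the standard scaled weight standardization formulation, not necessarily the paper's exact listing; the class name WSConv2d, the learnable per-channel gain parameterization, and the 1e-4 epsilon are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    # Conv2d with scaled weight standardization: before every forward
    # pass, each filter is re-centered and re-scaled, which removes the
    # need for a batch normalization layer after the convolution.
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__(in_channels, out_channels, kernel_size, **kwargs)
        # Learnable per-output-channel gain (illustrative assumption;
        # the paper's exact parameterization may differ).
        self.gain = nn.Parameter(torch.ones(out_channels, 1, 1, 1))

    def forward(self, x):
        # Standardize each filter over its (in_channels, kH, kW) entries.
        mean = self.weight.mean(dim=(1, 2, 3), keepdim=True)
        var = self.weight.var(dim=(1, 2, 3), unbiased=False, keepdim=True)
        fan_in = self.weight[0].numel()
        # Dividing by sqrt(var * fan_in) keeps activation variance
        # stable at initialization; 1e-4 avoids division by zero.
        weight = self.gain * (self.weight - mean) / torch.sqrt(var * fan_in + 1e-4)
        return F.conv2d(x, weight, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

A layer built this way is a drop-in replacement for nn.Conv2d, e.g. conv = WSConv2d(3, 64, 3, padding=1) applied to a (1, 3, 32, 32) input, which is what allows a FedWon-style model to run without any normalization layers.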
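
For the experiment setup row above, here is a self-contained sketch of one local training step with cross-entropy loss and SGD; the toy linear model, the dummy 28x28 batch, and the specific learning rate 0.01 (one point inside the tuned range [0.001, 0.1]) are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

# Toy stand-in model; the paper's experiments use CNNs (with WSConv
# layers) trained on the multi-domain datasets listed above.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # lr tuned in [0.001, 0.1]

images = torch.randn(32, 1, 28, 28)   # dummy 28x28 digit batch
labels = torch.randint(0, 10, (32,))  # dummy class labels

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()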