FedFA: Federated Feature Augmentation

Authors: Tianfei Zhou, Ender Konukoglu

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We offer both theoretical and empirical justifications to verify the effectiveness of FEDFA. Our code is available at https://github.com/tfzhou/FedFA. ... Empirically, we demonstrate that FEDFA (1) works favorably with extremely small local datasets; (2) shows remarkable generalization performance to unseen test clients outside of the federation; (3) outperforms traditional data augmentation techniques by solid margins, and can complement them quite well in the federated learning setup.
Researcher Affiliation | Academia | Tianfei Zhou & Ender Konukoglu, Computer Vision Lab, ETH Zurich
Pseudocode | Yes | In Appendix A, we provide detailed descriptions of FEDFA in Algorithm 1 and FFA in Algorithm 2. (An illustrative sketch of FFA is given after this table.)
Open Source Code | Yes | Our code is available at https://github.com/tfzhou/FedFA.
Open Datasets | Yes | We conduct extensive experiments on five datasets: Office-Caltech 10 (Gong et al., 2012), DomainNet (Peng et al., 2019) and Prostate MRI (Liu et al., 2020b) for validation of FEDFA in terms of feature-shift non-IID, as well as larger-scale datasets CIFAR-10 (Krizhevsky & Hinton, 2009) and EMNIST (Cohen et al., 2017) for cases of label distribution and data size heterogeneity, respectively.
Dataset Splits | Yes | The dataset splits of Office-Caltech 10 and DomainNet in our experiments are summarized in Table 11 and Table 12, respectively. ... The dataset splits of Prostate MRI used in our experiments are summarized in Table 13.
Hardware Specification | Yes | All our experiments are run with a single GPU (we used an NVIDIA GeForce RTX 2080 Ti with 11 GB of memory), thus can be reproduced by researchers with computational constraints as well.
Software Dependencies | No | The paper states 'We use PyTorch to implement FEDFA and other baselines.' but does not specify the version number for PyTorch or any other software component, which is required for reproducibility.
Experiment Setup | Yes | Following FEDBN (Li et al., 2020b), we adopt AlexNet (Krizhevsky et al., 2017) on Office-Caltech 10 and DomainNet, using the SGD optimizer with learning rate 0.01 and batch size 32. Following FEDHARMO (Jiang et al., 2022), we employ U-Net (Ronneberger et al., 2015) on Prostate MRI using Adam as the optimizer with learning rate 1e-4 and batch size 16. The communication rounds are 400 for Office-Caltech 10 and DomainNet, and 500 for Prostate MRI, with the number of local update epochs set to 1 in all cases. ... In Table 9, we summarize the configuration of our experiments for each of the datasets.
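
The Pseudocode row above refers to the FFA module (Algorithm 2), which augments intermediate features by perturbing their channel-wise statistics. Below is a minimal, illustrative PyTorch sketch of such a feature-statistic perturbation layer; the class name FeatureStatAugment, the fixed noise_scale, and the application probability prob are assumptions made here for brevity, and the cross-client variance estimation described in the paper is not reproduced.

import torch
import torch.nn as nn

class FeatureStatAugment(nn.Module):
    # Illustrative feature-statistic augmentation in the spirit of FFA (Algorithm 2).
    # Assumption: a fixed noise_scale stands in for the cross-client variance
    # estimates that FedFA aggregates during communication.
    def __init__(self, prob=0.5, noise_scale=0.1, eps=1e-6):
        super().__init__()
        self.prob = prob                # probability of applying the augmentation
        self.noise_scale = noise_scale  # stand-in for estimated statistic variances
        self.eps = eps

    def forward(self, x):
        # Apply only during training, and only with probability `prob`.
        if not self.training or torch.rand(1).item() > self.prob:
            return x
        # Per-instance, per-channel statistics of a (B, C, H, W) feature map.
        mu = x.mean(dim=(2, 3), keepdim=True)
        sigma = x.std(dim=(2, 3), keepdim=True) + self.eps
        # Perturb the statistics with Gaussian noise.
        mu_hat = mu + torch.randn_like(mu) * self.noise_scale * mu.abs()
        sigma_hat = sigma + torch.randn_like(sigma) * self.noise_scale * sigma
        # Standardize with the original statistics, re-style with the perturbed ones.
        return (x - mu) / sigma * sigma_hat + mu_hat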
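
The Experiment Setup row can also be mirrored in code. The following is a hypothetical configuration sketch assuming only the hyperparameters quoted above; the dictionary keys, the num_classes value, and the use of torchvision's alexnet are illustrative assumptions, not the authors' released code.

import torch
from torchvision.models import alexnet

# Hypothetical per-dataset configuration mirroring the quoted setup.
CONFIGS = {
    "office_caltech10": {"model": "AlexNet", "optimizer": "SGD",  "lr": 1e-2, "batch_size": 32, "rounds": 400, "local_epochs": 1},
    "domainnet":        {"model": "AlexNet", "optimizer": "SGD",  "lr": 1e-2, "batch_size": 32, "rounds": 400, "local_epochs": 1},
    "prostate_mri":     {"model": "U-Net",   "optimizer": "Adam", "lr": 1e-4, "batch_size": 16, "rounds": 500, "local_epochs": 1},
}

# Example: building the Office-Caltech 10 / DomainNet classifier and optimizer.
cfg = CONFIGS["office_caltech10"]
model = alexnet(num_classes=10)  # assumption: 10 classes for Office-Caltech 10
optimizer = torch.optim.SGD(model.parameters(), lr=cfg["lr"])

For Prostate MRI, the quoted setup would instead pair a U-Net segmentation model with torch.optim.Adam at learning rate 1e-4.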