PerFedMask: Personalized Federated Learning with Optimized Masking Vectors

Authors: Mehdi Setayesh, Xiaoxiao Li, Vincent W.S. Wong

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Results based on CIFAR-10 and CIFAR-100 datasets show that the proposed PerFedMask algorithm provides a higher test accuracy after fine-tuning and lower average number of trainable parameters when compared with six existing state-of-the-art FL algorithms in the literature. The codes are available at https://github.com/MehdiSet/PerFedMask."
Researcher Affiliation | Academia | "Mehdi Setayesh, Xiaoxiao Li, and Vincent W.S. Wong, Department of Electrical and Computer Engineering, The University of British Columbia, {setayeshm,xiaoxiao.li,vincentw}@ece.ubc.ca"
Pseudocode | Yes | "Algorithm 2 in Appendix B describes the DeviceLocalUpdate function based on (1) and (2). (...) Algorithm 1 summarizes the training procedure of PerFedMask."
Open Source Code | Yes | "The codes are available at https://github.com/MehdiSet/PerFedMask."
Open Datasets | Yes | "We conduct our experiments on CIFAR-10 and CIFAR-100 image classification tasks. (...) We also use AlexNet on the DomainNet dataset (Li et al., 2021) and provide the results in Appendix I to show the performance under feature non-IID configuration."
Dataset Splits | Yes | "Each device has 450 training data samples, 50 validation data samples, and 100 test data samples."
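The per-device 450/50/100 split quoted above can be sketched in plain Python. This is a hedged reconstruction, not the authors' code: the function name, the seeded shuffle, and the assumption that each device holds exactly 600 samples are ours, inferred only from the reported split sizes.

```python
import random

def split_device_data(indices, seed=0):
    """Partition one device's 600 sample indices into 450 train / 50 val / 100 test.

    `indices` is any list of 600 sample IDs; the partition sizes follow the
    per-device split reported in the paper. The shuffle is seeded so the
    split is repeatable across runs (an assumption, not from the paper).
    """
    assert len(indices) == 600, "each device is assumed to hold 600 samples"
    rng = random.Random(seed)
    shuffled = list(indices)
    rng.shuffle(shuffled)
    return shuffled[:450], shuffled[450:500], shuffled[500:]

# Example: split a hypothetical device's samples 0..599.
train_idx, val_idx, test_idx = split_device_data(range(600))
```

Slicing after a single shuffle guarantees the three subsets are disjoint and jointly cover all 600 samples.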
Hardware Specification | No | The paper does not explicitly specify the hardware components (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | Yes | "We perform the experiments using the PyTorch library (PyTorch, 2022) in Python 3.7."
Experiment Setup | Yes | "The batch size is set to 50. (...) For all the experiments, the learning rate starts with 0.1 and is decayed by a factor of 0.1 in communication rounds t ∈ {T/4, T/2, 3T/4}. (...) We fix the product of the local epochs E and the maximum number of communication rounds T to 320."
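The quoted step-decay schedule can be sketched as a small Python function. This is a minimal sketch under stated assumptions: the function name is ours, integer-floor milestones are assumed, and the example E, T values are illustrative choices constrained only by the paper's E · T = 320 condition.

```python
def lr_at_round(t, T, lr0=0.1, gamma=0.1):
    """Learning rate at communication round t over a horizon of T rounds.

    Starts at lr0 = 0.1 and is multiplied by gamma = 0.1 at each of the
    milestones t in {T/4, T/2, 3T/4}, matching the quoted schedule.
    """
    milestones = (T // 4, T // 2, 3 * T // 4)
    return lr0 * gamma ** sum(t >= m for m in milestones)

# One (E, T) pair consistent with E * T = 320; the paper fixes only the product.
E, T = 4, 80
assert E * T == 320
```

With T = 80, the rate is 0.1 for rounds 0–19, 0.01 for rounds 20–39, 0.001 for rounds 40–59, and 0.0001 thereafter; this mirrors what PyTorch's `MultiStepLR` scheduler would produce with milestones `[T//4, T//2, 3*T//4]` and `gamma=0.1`.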