Factorized-FL: Personalized Federated Learning with Parameter Factorization & Similarity Matching

Authors: Wonyong Jeong, Sung Ju Hwang

NeurIPS 2022

Reproducibility Variable | Result | Supporting Evidence (LLM Response)
Research Type | Experimental | "We extensively validate our method on both label and domain-heterogeneous settings, on which it outperforms the state-of-the-art personalized federated learning methods. The code is available at https://github.com/wyjeong/Factorized-FL." (Section 5, Experiment)
Researcher Affiliation | Academia | Wonyong Jeong, Graduate School of AI, KAIST, Seoul, South Korea (wyjeong@kaist.ac.kr); Sung Ju Hwang, Graduate School of AI, KAIST, Seoul, South Korea (sjhwang82@kaist.ac.kr)
Pseudocode | Yes | "As for the full training procedure, please see the pseudo-code of the algorithm in the supplementary file (Section A)."
Open Source Code | Yes | "The code is available at https://github.com/wyjeong/Factorized-FL."
Open Datasets | Yes | "Datasets: (1) Label Heterogeneous Scenario: we use CIFAR-10 [11] and SVHN [21] datasets... (2) Domain Heterogeneous Scenario: we use CIFAR-100 datasets [11]"
Dataset Splits | No | The paper mentions partitioning the datasets across clients and reporting test accuracy, but it does not explicitly state the train/validation/test splits (e.g., percentages or sample counts) needed for reproduction. (An illustrative client-partition sketch follows the table.)
Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or cloud computing instances used for the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies or their version numbers (e.g., Python 3.x, PyTorch x.x) required to replicate the experiments.
Experiment Setup | Yes | "Top (label heterogeneous scenario): we train 20 clients on each dataset (CIFAR-10 & SVHN) for 250 training iterations (E=5, R=50). Bottom (domain & label heterogeneous scenario): We train 20 clients for 500 training iterations (E=5, R=100)..." Here E is the number of local epochs per communication round and R the number of rounds, so E × R gives the total iteration budget. (See the schedule sketch after the table.)
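As the Dataset Splits row notes, the exact per-client partition is unspecified. For readers who want a starting point, below is a minimal, purely illustrative sketch of a shard-based label-heterogeneous partition of CIFAR-10 across the paper's 20 clients; the shard count per client, the seed, and all names are assumptions, not the authors' procedure.

```python
# Illustrative only: the paper does not state its exact split procedure.
# A common label-heterogeneous partition sorts examples by label and
# hands each client a few contiguous label shards.
import numpy as np
from torchvision import datasets

NUM_CLIENTS = 20        # matches the paper's experiment setup
SHARDS_PER_CLIENT = 2   # assumption; fewer shards -> more label skew

train_set = datasets.CIFAR10(root="./data", train=True, download=True)
labels = np.array(train_set.targets)

# Sort indices by label, then slice into NUM_CLIENTS * SHARDS_PER_CLIENT shards.
shards = np.array_split(np.argsort(labels), NUM_CLIENTS * SHARDS_PER_CLIENT)

rng = np.random.default_rng(seed=0)   # seed is an arbitrary choice
order = rng.permutation(len(shards))
client_indices = {
    c: np.concatenate([shards[order[c * SHARDS_PER_CLIENT + s]]
                       for s in range(SHARDS_PER_CLIENT)])
    for c in range(NUM_CLIENTS)
}
```

Each `client_indices[c]` can then back a `torch.utils.data.Subset` to build per-client loaders.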
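The Experiment Setup row reports the training budget as E = 5 local epochs over R = 50 (or 100) communication rounds. The skeleton below shows how that schedule composes in a generic FedAvg-style loop; the model, optimizer, and learning rate are placeholders, and plain parameter averaging stands in for Factorized-FL's actual factorized-parameter sharing and similarity-matching aggregation.

```python
# Generic E x R federated schedule: R = 50 rounds x E = 5 local epochs
# = 250 training iterations for 20 clients (label-heterogeneous setting).
import copy
import torch

E, R, NUM_CLIENTS = 5, 50, 20

def local_update(model, loader, epochs=E):
    # Placeholder local solver; the learning rate is an assumption.
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def average(states):
    # Plain FedAvg averaging; Factorized-FL instead shares factorized
    # parameters and weights its aggregation by client similarity.
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in states]).mean(0)
    return avg

# Driver (commented out; requires a model and per-client loaders):
# for r in range(R):
#     states = [local_update(copy.deepcopy(global_model), loaders[c])
#               for c in range(NUM_CLIENTS)]
#     global_model.load_state_dict(average(states))
```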