Personalized Federated Learning With a Graph

Authors: Fengwen Chen, Guodong Long, Zonghan Wu, Tianyi Zhou, Jing Jiang

IJCAI 2022

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. Experiments on traffic and image benchmark datasets demonstrate the effectiveness of the proposed method.
Researcher Affiliation: Academia. Australian Artificial Intelligence Institute, FEIT, University of Technology Sydney; University of Washington, Seattle; University of Maryland, College Park.
Pseudocode: Yes. The paper provides Algorithm 1, "Structural Federated Learning Server".
Open Source Code: Yes. All implementation code is available on GitHub: https://github.com/dawenzi098/SFL-Structural-Federated-Learning
Open Datasets: Yes. Four traffic datasets, METR-LA, PEMS-BAY, PEMS-D4, and PEMS-D8, are used to evaluate SFL's performance in different real-world scenarios, with the same data pre-processing as described in [Wu et al., 2019]. For the image datasets, the same train/test splits as in prior work are applied, and CIFAR-10 is artificially partitioned with a parameter k (shards) to control the level of non-IID data (a partitioning sketch appears below this assessment).
Dataset Splits: Yes. Z-score normalization is applied to the inputs, and the data are separated into training, validation, and test sets in a 70%, 20%, and 10% ratio (a normalization-and-split sketch appears below this assessment).
Hardware Specification: No. The paper does not specify any hardware details (e.g., GPU models, CPU types, or memory) used for running the experiments.
Software Dependencies: No. The paper mentions general software components such as a pure RNN and ResNet9, but does not specify version numbers or other software dependencies required for replication.
Experiment Setup: Yes. SGD with the same learning rate is employed as the optimizer for all training operations, the batch size is 128, and the total number of communication rounds is set to 20 (a training-loop sketch appears below this assessment).
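
For reference, the k-shard CIFAR-10 partitioning quoted above is usually implemented along the following lines. This is a minimal sketch of the common FedAvg-style shard scheme, not the authors' released code; the function name `partition_by_shards` and all constants are illustrative assumptions.

```python
# Hypothetical sketch of shard-based non-IID partitioning for CIFAR-10.
# Mirrors the common FedAvg-style scheme; not the authors' exact code.
import numpy as np
from torchvision.datasets import CIFAR10

def partition_by_shards(labels, num_clients, k, seed=0):
    """Sort examples by label, cut them into num_clients * k shards,
    and give each client k shards. Smaller k -> more non-IID."""
    rng = np.random.default_rng(seed)
    order = np.argsort(labels)                       # group indices by class
    shards = np.array_split(order, num_clients * k)  # equal-sized label shards
    shard_ids = rng.permutation(num_clients * k)     # shuffle shard assignment
    return {
        c: np.concatenate([shards[s] for s in shard_ids[c * k:(c + 1) * k]])
        for c in range(num_clients)
    }

train = CIFAR10(root="./data", train=True, download=True)
client_indices = partition_by_shards(np.array(train.targets), num_clients=100, k=2)
```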
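Similarly, the z-score normalization and 70%/20%/10% split can be sketched as below. Two assumptions worth flagging: the normalization statistics are computed on the training portion only, and the data are split chronologically, both conventional for traffic forecasting but not stated in the quoted text.

```python
# Hypothetical sketch: z-score normalization plus a 70/20/10
# train/validation/test split on time-ordered data.
import numpy as np

def normalize_and_split(data, ratios=(0.7, 0.2, 0.1)):
    n = len(data)
    n_train, n_val = int(n * ratios[0]), int(n * ratios[1])
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]
    # Statistics come from the training set only, so no information
    # leaks from validation/test into normalization (an assumption).
    mean, std = train.mean(), train.std()
    z = lambda x: (x - mean) / std
    return z(train), z(val), z(test)

readings = np.random.rand(34272, 207)  # METR-LA-sized placeholder array
train, val, test = normalize_and_split(readings)
```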
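Finally, a hedged sketch of a federated loop using the quoted hyperparameters: SGD, batch size 128, and 20 communication rounds come from the paper, while the learning-rate value, the linear client model, and the neighbor-weighted server aggregation are all illustrative stand-ins. In particular, `server_round` only gestures at a graph-aware server and is not the paper's Algorithm 1.

```python
# Hedged sketch: FedAvg-style loop with the quoted hyperparameters.
import numpy as np

ROUNDS, BATCH_SIZE, LR = 20, 128, 0.01  # LR value itself is assumed

def local_sgd_update(w, X, y, lr=LR, batch_size=BATCH_SIZE):
    """One epoch of mini-batch SGD on a linear least-squares client,
    standing in for the paper's RNN / ResNet9 client models."""
    for i in range(0, len(X), batch_size):
        xb, yb = X[i:i + batch_size], y[i:i + batch_size]
        grad = 2 * xb.T @ (xb @ w - yb) / len(xb)
        w = w - lr * grad
    return w

def server_round(client_params, adjacency):
    """Aggregate each client's parameters as a normalized weighted
    average over its graph neighbors (self-loop included); a rough
    stand-in for graph-based aggregation, not the paper's Algorithm 1."""
    A = adjacency + np.eye(len(adjacency))    # add self-loops
    A = A / A.sum(axis=1, keepdims=True)      # row-normalize the weights
    return list(A @ np.stack(client_params))  # one personalized model each

rng = np.random.default_rng(0)
n_clients, dim = 10, 5
adjacency = (rng.random((n_clients, n_clients)) < 0.3).astype(float)
adjacency = np.maximum(adjacency, adjacency.T)  # undirected client graph
np.fill_diagonal(adjacency, 0)
data = [(rng.normal(size=(256, dim)), rng.normal(size=256)) for _ in range(n_clients)]
client_params = [np.zeros(dim) for _ in range(n_clients)]

for _ in range(ROUNDS):  # 20 communication rounds, as quoted
    client_params = [local_sgd_update(w, X, y) for w, (X, y) in zip(client_params, data)]
    client_params = server_round(client_params, adjacency)
```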