Knowledge-Aware Parameter Coaching for Personalized Federated Learning

Authors: Mingjian Zhi, Yuanguo Bi, Wenchao Xu, Haozhao Wang, Tianao Xiang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments are conducted over various datasets, which show that the proposed method can achieve better performance compared with the state-of-the-art baselines in terms of accuracy and convergence speed."
Researcher Affiliation | Academia | 1. Northeastern University, China; 2. The Hong Kong Polytechnic University, Hong Kong, China; 3. Huazhong University of Science and Technology, China
Pseudocode | Yes | Algorithm 1: Parameter Coaching Process in the Server; Algorithm 2: Parameter Coaching Process in Client i (a hedged sketch of this loop follows the table)
Open Source Code | No | The paper provides no statement or link indicating that its source code is available.
Open Datasets | Yes | "Four public benchmark datasets are used to evaluate the proposed method, MNIST, FMNIST, CIFAR10 and CIFAR100." (a loading sketch follows the table)
Dataset Splits | No | The paper mentions training and test data but does not state a validation split.
Hardware Specification | No | The paper does not specify the hardware used to run the experiments.
Software Dependencies | No | "All the experiments are repeated over 3 runs in Pytorch." No framework version is specified.
Experiment Setup | Yes | "The model is trained by K = 50 rounds on MNIST/FMNIST, K = 100 rounds on CIFAR10, and K = 200 rounds on CIFAR100. The local epochs for W and R are set to 5 and 1 for all cases. In addition, cross-entropy loss and stochastic gradient descent method are adopted to update the client parameters and relation cube, and the learning rates for W and R are both set to 0.01." (collected into a configuration sketch after the table)
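
The paper's Algorithms 1 and 2 are not reproduced here, but the quoted setup (clients alternate 5 SGD epochs on their parameters W with 1 epoch on a relation structure R, both at learning rate 0.01) is enough to sketch the overall coaching loop. The sketch below is a hypothetical simplification, not the paper's method: the relation cube is reduced to a single n_clients x n_clients matrix, the softmax-mixture aggregation rule is our assumption, and the synthetic data and linear clients exist only to make the loop runnable.

```python
# Hypothetical sketch of a parameter-coaching-style loop (Algorithms 1 and 2).
# Assumptions: relation cube simplified to an (n x n) matrix R; personalized
# aggregation taken to be a softmax-weighted mixture of client parameters.
import torch
import torch.nn as nn

N_CLIENTS, DIM, N_CLASSES = 4, 20, 2

def make_client_data(seed):
    # tiny synthetic per-client dataset, purely for illustration
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(64, DIM, generator=g)
    y = (x.sum(dim=1) > 0).long()
    return x, y

clients = [nn.Linear(DIM, N_CLASSES) for _ in range(N_CLIENTS)]
data = [make_client_data(s) for s in range(N_CLIENTS)]
R = torch.zeros(N_CLIENTS, N_CLIENTS, requires_grad=True)  # relation weights
loss_fn = nn.CrossEntropyLoss()

def flat(model):
    # flatten a model's parameters into one detached vector
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])

def load(model, vec):
    # write a flat parameter vector back into a model
    i = 0
    for p in model.parameters():
        n = p.numel()
        p.data.copy_(vec[i:i + n].reshape(p.shape))
        i += n

for rnd in range(10):  # the paper trains K = 50/100/200 rounds per dataset
    # --- server side (assumed form of Algorithm 1): each client receives
    # a relation-weighted mixture of all clients' parameters ---
    W = torch.stack([flat(m) for m in clients])
    mix = torch.softmax(R, dim=1) @ W
    for i, m in enumerate(clients):
        load(m, mix[i].detach())
    # --- client side (Algorithm 2): 5 local epochs on W, SGD, lr = 0.01 ---
    for i, m in enumerate(clients):
        opt_w = torch.optim.SGD(m.parameters(), lr=0.01)
        x, y = data[i]
        for _ in range(5):
            opt_w.zero_grad()
            loss_fn(m(x), y).backward()
            opt_w.step()
    # --- 1 epoch on the relation weights R, lr = 0.01: fit each client's
    # mixture to that client's own data (gradients flow to R only) ---
    opt_r = torch.optim.SGD([R], lr=0.01)
    opt_r.zero_grad()
    W = torch.stack([flat(m) for m in clients])
    mix = torch.softmax(R, dim=1) @ W
    r_loss = torch.zeros(())
    for i in range(N_CLIENTS):
        w = mix[i][: N_CLASSES * DIM].reshape(N_CLASSES, DIM)
        b = mix[i][N_CLASSES * DIM:]
        x, y = data[i]
        r_loss = r_loss + loss_fn(x @ w.t() + b, y)
    r_loss.backward()
    opt_r.step()
```

The softmax over each row of R is a deliberate simplification: it keeps every personalized aggregate a convex combination of client parameters, which makes the mixture weights interpretable, but it is our design choice rather than anything stated in the paper.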
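For the Open Datasets row: all four benchmarks are distributed with torchvision, so a reproduction can fetch them directly. Only train/test splits are shown, matching what the paper reports; the ToTensor transform and the root path are our assumptions, since the paper does not describe its preprocessing.

```python
# Loading the four public benchmarks via torchvision (transform and root
# path are assumptions; the paper does not specify preprocessing).
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
root = "./data"  # hypothetical local path

train_sets = {
    "MNIST":    datasets.MNIST(root, train=True,  download=True, transform=to_tensor),
    "FMNIST":   datasets.FashionMNIST(root, train=True,  download=True, transform=to_tensor),
    "CIFAR10":  datasets.CIFAR10(root, train=True,  download=True, transform=to_tensor),
    "CIFAR100": datasets.CIFAR100(root, train=True,  download=True, transform=to_tensor),
}
test_sets = {
    "MNIST":    datasets.MNIST(root, train=False, download=True, transform=to_tensor),
    "FMNIST":   datasets.FashionMNIST(root, train=False, download=True, transform=to_tensor),
    "CIFAR10":  datasets.CIFAR10(root, train=False, download=True, transform=to_tensor),
    "CIFAR100": datasets.CIFAR100(root, train=False, download=True, transform=to_tensor),
}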
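For the Experiment Setup row, the hyperparameters the paper reports are collected into a single configuration sketch below; the dictionary layout and key names are ours, the values are the paper's.

```python
# Reported hyperparameters for a reproduction attempt (key names are ours;
# values come from the paper's experiment-setup description).
CONFIG = {
    "rounds": {"MNIST": 50, "FMNIST": 50, "CIFAR10": 100, "CIFAR100": 200},
    "local_epochs_w": 5,      # local epochs for client parameters W
    "local_epochs_r": 1,      # local epochs for the relation cube R
    "lr_w": 0.01,             # SGD learning rate for W
    "lr_r": 0.01,             # SGD learning rate for R
    "loss": "cross_entropy",
    "optimizer": "sgd",
    "repeats": 3,             # all experiments repeated over 3 runs in PyTorch
}
```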