FedLF: Layer-Wise Fair Federated Learning
Authors: Zibin Pan, Chi Li, Fangchen Yu, Shuyi Wang, Haijin Wang, Xiaoying Tang, Junhua Zhao
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on different learning tasks and models demonstrate that FedLF outperforms the SOTA FL algorithms in terms of accuracy and fairness. The source code is available at https://github.com/zibinpan/FedLF. |
| Researcher Affiliation | Academia | (1) The School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China; (2) The Shenzhen Institute of Artificial Intelligence and Robotics for Society; (3) The Guangdong Provincial Key Laboratory of Future Networks of Intelligence; (4) Shenzhen Research Institute of Big Data |
| Pseudocode | Yes | Algorithm 1: Layer-wise Fair Federated Learning (FedLF) |
| Open Source Code | Yes | The source code is available at https://github.com/zibinpan/FedLF. |
| Open Datasets | Yes | We evaluate the performance of algorithms on the public datasets Fashion MNIST (FMNIST) (Xiao, Rasul, and Vollgraf 2017) and CIFAR-10/100 (Krizhevsky and Hinton 2009), where the training and testing data have already been split. |
| Dataset Splits | No | The paper explicitly refers to 'training and testing data' and to 'train' and 'test' sets, but it does not mention a validation set or give specific percentages/counts for a three-way split. |
| Hardware Specification | No | The paper does not specify any particular hardware used for experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions 'Stochastic Gradient Descent (SGD)' and references other FL algorithms, but it does not list specific software or library names with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | We set the learning rate η ∈ {0.01, 0.05, 0.1} with a decay of 0.999 per round and choose the best performance of each method in comparison. We take the average of results in 5 runs with different random seeds. ... all clients use Stochastic Gradient Descent (SGD) on local datasets with local epoch E = 1. |
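
The "Open Datasets" row cites benchmarks whose train/test split is already fixed. A minimal loading sketch, assuming torchvision as the data source and a bare `ToTensor` transform (the paper does not state its tooling or preprocessing):

```python
# Hedged sketch: loading Fashion-MNIST and CIFAR-10/100 with their
# pre-defined train/test split. torchvision and the transform are
# assumptions, not details taken from the paper.
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()  # assumed preprocessing

fmnist_train = torchvision.datasets.FashionMNIST(
    root="./data", train=True, download=True, transform=transform)
fmnist_test = torchvision.datasets.FashionMNIST(
    root="./data", train=False, download=True, transform=transform)

cifar10_train = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)
cifar10_test = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=transform)

cifar100_train = torchvision.datasets.CIFAR100(
    root="./data", train=True, download=True, transform=transform)
cifar100_test = torchvision.datasets.CIFAR100(
    root="./data", train=False, download=True, transform=transform)
```

How the training split is partitioned across clients is not shown here; the paper's repository would be the authoritative source for that step.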
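The "Experiment Setup" row pins down local SGD with E = 1, a learning rate chosen from {0.01, 0.05, 0.1} decayed by 0.999 per communication round, and results averaged over 5 random seeds. Below is a hedged skeleton of that configuration; the model, number of rounds, client data loaders, and the plain parameter averaging are placeholders, not the paper's FedLF update (Algorithm 1).

```python
# Sketch of the reported training configuration, not the FedLF algorithm.
import copy
import torch

LEARNING_RATE_GRID = [0.01, 0.05, 0.1]  # grid reported in the paper
DECAY_PER_ROUND = 0.999                 # multiplicative lr decay per round
LOCAL_EPOCHS = 1                        # E = 1
NUM_SEEDS = 5                           # results averaged over 5 random seeds


def local_sgd(model, loader, lr, device="cpu"):
    """One client's local update: plain SGD for LOCAL_EPOCHS epochs."""
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(LOCAL_EPOCHS):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()
    return model.state_dict()


def federated_run(global_model, client_loaders, num_rounds, base_lr):
    """Skeleton FL loop with per-round lr decay. Aggregation here is naive
    parameter averaging as a placeholder; the paper's layer-wise fair update
    (Algorithm 1) would replace it."""
    lr = base_lr
    for _ in range(num_rounds):
        states = [local_sgd(copy.deepcopy(global_model), loader, lr)
                  for loader in client_loaders]
        avg = {k: torch.stack([s[k].float() for s in states]).mean(0)
               for k in states[0]}
        global_model.load_state_dict(avg)
        lr *= DECAY_PER_ROUND  # 0.999 decay per round
    return global_model
```

In line with the row above, each learning rate in the grid would be tried and the run repeated for NUM_SEEDS seeds, reporting the average of the best-performing setting.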