Federated Label-Noise Learning with Local Diversity Product Regularization
Authors: Xiaochen Zhou, Xudong Wang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The results show that FedLNL improves the test accuracy of a trained model by up to 25.98%, compared with the state-of-the-art FL schemes that tackle label-noise issues. The performance of FedLNL is evaluated with extensive experiments: the effectiveness of the alternating update method and the LDP regularizer is first verified in an ablation study, and the overall performance of FedLNL is then evaluated in the case of limited local training data samples. |
| Researcher Affiliation | Academia | Xiaochen Zhou, Xudong Wang; Shanghai Jiao Tong University; xiaochenzhou@sjtu.edu.cn, wxudong@ieee.org |
| Pseudocode | No | The paper describes methods and processes verbally and with mathematical equations, but it does not include any formally labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about making its source code publicly available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Three datasets are adopted in the experiments: CIFAR-10 (Krizhevsky, Hinton et al. 2009), CIFAR-100 (Krizhevsky, Hinton et al. 2009), and Clothing1M (Xiao et al. 2015). |
| Dataset Splits | No | The paper provides the total number of samples for training and testing datasets (e.g., 'CIFAR-10 ... # samples 50,000 ... # samples 10,000 (test)'), but it does not specify a separate validation split or its size. |
| Hardware Specification | Yes | All the experiments are executed on a server with one i9-10900K CPU, one GeForce RTX 3090 GPU, and 64 GB RAM. |
| Software Dependencies | Yes | The implementation of FedLNL is based on PyTorch 1.7.0 (Paszke et al. 2019). |
| Experiment Setup | Yes | In FedLNL, each client trains its local classifier via stochastic gradient descent (SGD) with a momentum of 0.9. The learning rate is set to 0.01. The batch size is set to 64. The number of local iterations is set to 3 and the total number of communication rounds between the client devices and the central server is set to 300. Hyperparameter λ is set to 0.01. (A hedged sketch of this configuration follows the table.) |
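
Taken together, the Open Datasets and Experiment Setup rows pin down most of what a reproduction would need. Below is a minimal PyTorch sketch of that configuration. The model architecture, the client-side data partitioning, and the LDP regularizer itself are not specified in the excerpts above, so `ldp_penalty` and the single-client `local_update` loop here are hypothetical placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# Hyperparameters as reported in the Experiment Setup row.
LEARNING_RATE = 0.01
MOMENTUM = 0.9
BATCH_SIZE = 64
LOCAL_ITERATIONS = 3
COMMUNICATION_ROUNDS = 300
LAMBDA = 0.01  # weight of the LDP regularizer

# CIFAR-10 as used in the paper: 50,000 training / 10,000 test samples.
transform = transforms.ToTensor()
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=BATCH_SIZE, shuffle=True)


def ldp_penalty(model: nn.Module) -> torch.Tensor:
    """Hypothetical stand-in for the paper's LDP regularizer, which the
    excerpts above do not define; replace with the actual penalty."""
    return sum(p.pow(2).sum() for p in model.parameters())


def local_update(model: nn.Module, loader) -> dict:
    """One client's local round: 3 iterations of SGD with momentum 0.9,
    minimizing cross-entropy loss plus the lambda-weighted regularizer."""
    optimizer = optim.SGD(model.parameters(),
                          lr=LEARNING_RATE, momentum=MOMENTUM)
    criterion = nn.CrossEntropyLoss()
    data_iter = iter(loader)
    for _ in range(LOCAL_ITERATIONS):
        inputs, labels = next(data_iter)
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels) + LAMBDA * ldp_penalty(model)
        loss.backward()
        optimizer.step()
    return model.state_dict()
```

The outer federated loop over the 300 communication rounds would call `local_update` on each client and aggregate the returned state dicts at the central server (e.g., FedAvg-style weight averaging); that aggregation rule is likewise not spelled out in the excerpts above.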