Federated Learning with Extremely Noisy Clients via Negative Distillation
Authors: Yang Lu, Lin Chen, Yonggang Zhang, Yiliang Zhang, Bo Han, Yiu-ming Cheung, Hanzi Wang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To verify the efficacy of FedNed, we conduct extensive experiments under various settings, demonstrating that FedNed can consistently outperform baselines and achieve state-of-the-art performance. |
| Researcher Affiliation | Academia | (1) Fujian Key Laboratory of Sensing and Computing for Smart City, School of Informatics, Xiamen University, China; (2) Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China; (3) Department of Computer Science, Hong Kong Baptist University, Hong Kong, China |
| Pseudocode | Yes | Algorithm 1: Federated Negative Distillation |
| Open Source Code | No | The paper does not provide a statement about the release of open-source code or a link to a code repository. |
| Open Datasets | Yes | In the experiments, we adopt CIFAR-10 and CIFAR-100 (Krizhevsky and Hinton 2009) to verify the efficacy of the proposed method... We use 128 images from ImageNet (Russakovsky et al. 2015) as D_U for training CIFAR-100. |
| Dataset Splits | No | The paper mentions using "the official testing data split by the benchmark" for the test set, but it does not explicitly describe the training or validation splits. |
| Hardware Specification | Yes | All experiments are run with PyTorch on two NVIDIA GeForce RTX 3090 GPUs. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number or other software dependencies with versions. |
| Experiment Setup | Yes | For local training, the batch size is set at 32. We use SGD with a learning rate of 0.05 as the optimizer. The threshold λ for identifying EN (extremely noisy) clients is set at 0.12. By default, we run 100 communication rounds to present the experimental results. The total number of clients is set at 20, and an active client ratio of 50% is maintained in each round. |
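The setup row above fixes most of the reported training hyperparameters. The sketch below simply collects them into a single configuration for reference; the dictionary keys and the `build_optimizer` helper are hypothetical names chosen for illustration, since the paper releases no code.

```python
import torch

# Hyperparameters reported in the paper's experiment setup
# (key names are hypothetical; no official code is released).
FEDNED_CONFIG = {
    "batch_size": 32,             # local training batch size
    "learning_rate": 0.05,        # SGD learning rate
    "en_threshold_lambda": 0.12,  # threshold for flagging extremely noisy (EN) clients
    "communication_rounds": 100,  # default number of rounds reported
    "num_clients": 20,            # total number of clients
    "active_client_ratio": 0.5,   # fraction of clients sampled per round
}

def build_optimizer(model: torch.nn.Module) -> torch.optim.Optimizer:
    """Local SGD optimizer with the reported learning rate."""
    return torch.optim.SGD(model.parameters(), lr=FEDNED_CONFIG["learning_rate"])

# With 20 clients and a 50% active ratio, 10 clients participate per round.
clients_per_round = int(FEDNED_CONFIG["num_clients"] * FEDNED_CONFIG["active_client_ratio"])
```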
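The pseudocode row confirms the paper includes Algorithm 1 (Federated Negative Distillation), but the table does not reproduce it. As a rough illustration only, the snippet below sketches one plausible form of a "negative" distillation term, in which a model's predictions on unlabeled data (cf. D_U above) are pushed away from an extremely noisy client's predictions by negating a standard KL distillation loss; the actual objective and aggregation in Algorithm 1 may differ.

```python
import torch
import torch.nn.functional as F

def negative_distillation_loss(student_logits: torch.Tensor,
                               en_teacher_logits: torch.Tensor,
                               temperature: float = 1.0) -> torch.Tensor:
    """Illustrative 'negative' distillation objective (hypothetical form).

    Standard distillation minimizes KL(teacher || student); negating that loss
    instead rewards predictions that diverge from an extremely noisy (EN)
    client's model. This is a sketch, not the paper's exact loss.
    """
    teacher_prob = F.softmax(en_teacher_logits / temperature, dim=-1)
    student_log_prob = F.log_softmax(student_logits / temperature, dim=-1)
    kl = F.kl_div(student_log_prob, teacher_prob, reduction="batchmean")
    return -kl  # larger divergence from the EN teacher lowers the loss

# Example on a batch of 128 unlabeled images with 100 classes (CIFAR-100 setting):
student = torch.randn(128, 100, requires_grad=True)
en_teacher = torch.randn(128, 100)
loss = negative_distillation_loss(student, en_teacher)
loss.backward()
```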