FedFixer: Mitigating Heterogeneous Label Noise in Federated Learning

Authors: Xinyuan Ji, Zhaowei Zhu, Wei Xi, Olga Gadyatskaya, Zilong Song, Yong Cai, Yang Liu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate the effectiveness of FedFixer through extensive experiments on benchmark datasets. The results demonstrate that FedFixer can perform well in filtering noisy label samples on different clients, especially in highly heterogeneous label noise scenarios.
Researcher Affiliation | Collaboration | Xinyuan Ji (1, 2), Zhaowei Zhu (3), Wei Xi (1*), Olga Gadyatskaya (2), Zilong Song (1), Yong Cai (4), Yang Liu (5*); 1: Xi'an Jiaotong University, 2: Leiden University, 3: Docta.ai, 4: IQVIA Inc. & California State University, Monterey Bay, 5: University of California, Santa Cruz
Pseudocode | Yes | Algorithm 1: FedFixer.
Open Source Code | No | The paper does not provide an explicit statement about releasing open-source code for the described methodology, nor does it include a link to a code repository.
Open Datasets | Yes | We evaluate different methods on MNIST (LeCun et al. 1998), CIFAR-10 (Krizhevsky, Hinton et al. 2009), and Clothing1M (Xiao et al. 2015) datasets.
Dataset Splits | No | The paper mentions using the 'data division and noise model used in FedCorr (Xu et al. 2022)' but does not explicitly provide specific training, validation, or test dataset splits (e.g., percentages or sample counts) within its own text. (A hedged sketch of a FedCorr-style noise model appears after this table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions 'the local solver SGD' but does not specify other software dependencies with version numbers (e.g., Python version, or the versions of deep learning frameworks such as PyTorch or TensorFlow).
Experiment Setup | Yes | Implementation Details: We evaluate different methods on MNIST (LeCun et al. 1998), CIFAR-10 (Krizhevsky, Hinton et al. 2009), and Clothing1M (Xiao et al. 2015) datasets with specific rounds, model architectures, total clients, and the fraction of selected clients for each dataset, as summarized in Tab. 1. ... local epoch E = 5, batch size B = 32, and the local solver SGD. For our FedFixer method, we set β = 2 for CIFAR-10/MNIST and β = 1 for Clothing1M. The learning rate ζ is set to η/2 for the MNIST dataset and ζ = η for the remaining datasets. Additionally, we set the beginning round of noisy sample filtering Ts = 10, hyperparameters γ = 1, and λ = 15 for all datasets. (These values are collected in the configuration sketch below.)
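
The quoted implementation details map directly onto a small configuration object. The sketch below collects the reported hyperparameters in one place; the base learning rate η itself is not reported in the quoted excerpt, so its value here is a placeholder assumption, not a figure from the paper.

```python
# Minimal configuration sketch of the reported FedFixer hyperparameters.
# ETA is an assumed placeholder: the excerpt defines zeta relative to the
# base learning rate eta but does not state eta's value.

ETA = 0.01  # placeholder assumption for the base learning rate eta

FEDFIXER_CONFIG = {
    "local_epochs": 5,             # E = 5
    "batch_size": 32,              # B = 32
    "local_solver": "SGD",
    "filter_start_round": 10,      # T_s = 10, first round of noisy-sample filtering
    "gamma": 1,                    # gamma = 1 for all datasets
    "lambda": 15,                  # lambda = 15 for all datasets
    "beta": {"MNIST": 2, "CIFAR-10": 2, "Clothing1M": 1},
    # zeta = eta/2 for MNIST, zeta = eta for the remaining datasets
    "zeta": {"MNIST": ETA / 2, "CIFAR-10": ETA, "Clothing1M": ETA},
}
```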
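
On the Dataset Splits row: the paper defers to FedCorr (Xu et al. 2022) for the data division and noise model. For a concrete picture, here is a minimal sketch of heterogeneous label-noise injection in the style commonly attributed to FedCorr, assuming a formulation where a fraction rho of clients is noisy and each noisy client's noise level is drawn uniformly from [tau, 1). The parameter names, defaults, and the uniform label flipping are assumptions for illustration, not details from this paper.

```python
import numpy as np

def add_heterogeneous_noise(client_labels, num_classes, rho=0.6, tau=0.3, seed=0):
    """Inject client-heterogeneous label noise (illustrative sketch).

    client_labels: list of per-client label arrays.
    rho: assumed fraction of clients whose labels are corrupted.
    tau: assumed lower bound on a noisy client's noise level.
    """
    rng = np.random.default_rng(seed)
    noisy = []
    for labels in client_labels:
        labels = np.asarray(labels).copy()
        if rng.random() < rho:                    # this client is noisy
            noise_level = rng.uniform(tau, 1.0)   # heterogeneous per-client rate
            n_flip = int(noise_level * len(labels))
            idx = rng.choice(len(labels), size=n_flip, replace=False)
            # Uniform flipping (may occasionally re-draw the true class);
            # a simplification, not necessarily FedCorr's exact scheme.
            labels[idx] = rng.integers(0, num_classes, size=n_flip)
        noisy.append(labels)
    return noisy
```

Because each noisy client draws its own noise level, the resulting noise rates differ across clients, which is the "heterogeneous label noise" setting the paper targets.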