Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

FedCDA: Federated Learning with Cross-rounds Divergence-aware Aggregation

Authors: Haozhao Wang, Haoran Xu, Yichen Li, Yuan Xu, Ruixuan Li, Tianwei Zhang

ICLR 2024

Reproducibility variables, results, and supporting evidence (LLM responses):
Research Type: Experimental. Evidence: "Extensive experiments conducted on various models and datasets reveal our approach outperforms state-of-the-art aggregation methods." (and Section 6, Evaluation)
Researcher Affiliation: Academia. Evidence: "1 S-Lab, Nanyang Technological University; 2 Zhejiang University; 3 Department of Computer Science, Huazhong University of Science and Technology; 4 Nanyang Technological University"
Pseudocode: Yes. Evidence: "Algorithm 1 FedCDA Algorithm"
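The paper's Algorithm 1 (FedCDA) is not reproduced in this report. For context on what a server-side aggregation step looks like, here is a minimal sketch of the standard FedAvg weighted average, the baseline that divergence-aware methods modify; this is not FedCDA's cross-round rule, and all names are illustrative.

```python
def fedavg_aggregate(client_models, client_sizes):
    """Weighted average of client parameter dicts (FedAvg baseline).

    client_models: list of {param_name: value} dicts, one per client.
    client_sizes: number of local samples per client (aggregation weights).
    """
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    global_model = {}
    for name in client_models[0]:
        # Each global parameter is the sample-size-weighted mean of the
        # corresponding client parameters.
        global_model[name] = sum(w * m[name] for w, m in zip(weights, client_models))
    return global_model

# Example: two clients with scalar "parameters".
clients = [{"w": 1.0, "b": 0.0}, {"w": 3.0, "b": 2.0}]
sizes = [1, 3]  # the second client holds 3x the data
print(fedavg_aggregate(clients, sizes))  # {'w': 2.5, 'b': 1.5}
```

In real PyTorch code the same loop runs over `state_dict()` tensors rather than Python floats.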
Open Source Code: No. Evidence: the paper does not contain an explicit statement about releasing the source code or a link to a code repository for the methodology described.
Open Datasets: Yes. Evidence: "Datasets and Models: We consider three popular datasets in experiments: Fashion-MNIST (Xiao et al., 2017), CIFAR-10 (Krizhevsky et al., 2009) and CIFAR-100 (Krizhevsky et al., 2009)"
Dataset Splits: No. Evidence: the paper describes data partitioning for federated learning among clients (shards and Dirichlet distribution) but does not specify explicit training/validation/test splits (e.g., 80/10/10 percentages or counts) or mention a separate validation set.
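The Dirichlet partitioning mentioned above can be sketched as follows. This is the generic non-IID Dirichlet split commonly used in federated learning papers, not necessarily this paper's exact procedure; the function names and defaults are illustrative.

```python
import random
from collections import defaultdict

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split example indices among clients using per-class Dirichlet draws.

    Smaller alpha -> more skewed (more non-IID) label distributions per client.
    """
    rng = random.Random(seed)

    def dirichlet(k):
        # Sample Dirichlet(alpha, ..., alpha) via normalized Gamma draws.
        draws = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
        s = sum(draws)
        return [d / s for d in draws]

    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)

    clients = [[] for _ in range(num_clients)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        props = dirichlet(num_clients)
        start = 0
        for c, p in enumerate(props):
            # Last client takes the remainder so every index is assigned once.
            end = len(idxs) if c == num_clients - 1 else start + int(p * len(idxs))
            clients[c].extend(idxs[start:end])
            start = end
    return clients

# Example: 100 examples from 4 classes split across 5 clients.
labels = [i % 4 for i in range(100)]
parts = dirichlet_partition(labels, num_clients=5, alpha=0.5)
```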
Hardware Specification: Yes. Evidence: "We implement the whole experiment in a simulation environment based on PyTorch 2.0 and 8 NVIDIA GeForce RTX 3090 GPUs."
Software Dependencies: Yes. Evidence: "We implement the whole experiment in a simulation environment based on PyTorch 2.0 and 8 NVIDIA GeForce RTX 3090 GPUs."
Experiment Setup: Yes. Evidence: "We set the local epoch to 20, batch size to 64, and learning rate to 1e-3. We employ SGD optimizer with momentum of 1e-4 and weight decay of 1e-5 for all methods and datasets."
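The quoted optimizer settings correspond to the standard SGD-with-momentum update, with L2 weight decay folded into the gradient (PyTorch's `torch.optim.SGD` formulation). Below is a minimal scalar sketch of one such step; the hyperparameter defaults mirror the quote, and the function name is illustrative.

```python
def sgd_step(param, grad, buf, lr=1e-3, momentum=1e-4, weight_decay=1e-5):
    """One SGD update with momentum and L2 weight decay, PyTorch-style.

    Weight decay is added to the gradient before the momentum buffer
    is updated; the parameter then moves against the buffer.
    """
    g = grad + weight_decay * param   # L2 penalty contribution
    buf = momentum * buf + g          # momentum buffer update
    param = param - lr * buf          # parameter step
    return param, buf

# Example: a single scalar parameter taking two steps.
p, b = 1.0, 0.0
for g in (0.5, 0.25):
    p, b = sgd_step(p, g, b)
```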