Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

FAST: A Lightweight Mechanism Unleashing Arbitrary Client Participation in Federated Learning

Authors: Zhe Li, Seyedsina Nabavirazavi, Bicheng Ying, Sitharama Iyengar, Haibo Yang

IJCAI 2025

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that FAST significantly improves performance under ACP and high data heterogeneity. ... We perform extensive experiments on Fashion-MNIST [Xiao et al., 2017] and CIFAR-10 [Krizhevsky et al., 2009]... |
| Researcher Affiliation | Collaboration | 1 Rochester Institute of Technology, 2 Florida International University, 3 Google Inc. EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Federated Average with Snapshot (FAST); Algorithm 2: Adaptive q in FAST |
| Open Source Code | No | The paper contains no explicit statement about releasing the source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We perform extensive experiments on Fashion-MNIST [Xiao et al., 2017] and CIFAR-10 [Krizhevsky et al., 2009], considering various Non-IID degrees and utilizing the four distributions to simulate different client participation. ... We employ Fashion-MNIST [Xiao et al., 2017] and CIFAR-10 [Krizhevsky et al., 2009] for image classification tasks, and we utilize the Shakespeare dataset [Caldas et al., 2018] for the next-character prediction task. |
| Dataset Splits | No | The paper describes partitioning data across clients by Non-IID degree using a Dirichlet distribution, with a 10% client participation rate, but it does not specify the training/validation/test splits of the datasets themselves (e.g., percentages or sample counts for Fashion-MNIST, CIFAR-10, or Shakespeare). |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory, or cloud instances) used to run the experiments. |
| Software Dependencies | No | The paper mentions using FedLab [Zeng et al., 2023] for data partitioning but does not specify its version or any other software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions). |
| Experiment Setup | Yes | Initialize: model parameter x0, learning rate ηc, local update steps K, communication rounds R, snapshot step interval I (or probability q). ... Our FL system comprises 100 clients in total for Fashion-MNIST and CIFAR-10 and 139 clients for Shakespeare. In each round, only 10% clients are chosen to participate in training. ... We conduct a series of experiments to assess the performance under different λ. |
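The "Dataset Splits" row notes that clients' data is partitioned by Non-IID degree with a Dirichlet distribution. A common way to do this (the paper delegates it to FedLab, whose exact procedure may differ) is to draw, for each class, a Dirichlet vector of proportions over clients and split that class's samples accordingly; the function name and `alpha` default below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def dirichlet_partition(labels, num_clients=100, alpha=0.5, seed=0):
    """Partition sample indices across clients via a Dirichlet prior.

    Smaller alpha -> more heterogeneous (more Non-IID) label mixes per
    client. This is a generic sketch, not the FedLab implementation.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # One Dirichlet draw per class: this class's share for each client.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, part in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(part.tolist())
    return client_indices
```

Every sample is assigned to exactly one client, so the union of the returned index lists recovers the full dataset.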
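The "Experiment Setup" row quotes FAST's inputs: a snapshot step interval I, or equivalently a snapshot probability q, alongside 10% client participation per round. One plausible reading is that most rounds use whatever arbitrary participation process is in effect, while with probability q a "snapshot" round samples clients uniformly. The sketch below encodes that reading; the function name, the uniform-snapshot choice, and all defaults are assumptions for illustration, not the paper's exact procedure:

```python
import random

def select_round_clients(num_clients=100, frac=0.10, q=0.1,
                         arbitrary_sampler=None, rng=None):
    """Pick this round's clients; occasionally force a snapshot round.

    With probability q (or when no arbitrary sampler is given), sample
    clients uniformly at random ("snapshot" round). Otherwise defer to
    `arbitrary_sampler(k)`, which models arbitrary client participation.
    Returns (client_ids, is_snapshot).
    """
    rng = rng or random.Random(0)
    k = max(1, int(frac * num_clients))  # e.g. 10% of 100 clients -> 10
    if rng.random() < q or arbitrary_sampler is None:
        return sorted(rng.sample(range(num_clients), k)), True
    return sorted(arbitrary_sampler(k)), False
```

Setting q = 0 with a biased `arbitrary_sampler` recovers plain arbitrary participation, while q = 1 reduces to uniform sampling every round.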