Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Byzantine Resilient and Fast Federated Few-Shot Learning

Authors: Ankit Pratap Singh, Namrata Vaswani

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Figure 1 we plot Error vs. Iteration, where Error = SD_F(U, U*)/r. We report the mean SD_F over 100 Monte Carlo runs. We compare Byz-Fed-AltGDmin-Learn (GMoM) with the baseline algorithm AltGDmin-Learn (Mean) in the no-attack setting. We also provide results for Byz-Fed-AltGDmin-Learn (GM) for both values of L_byz. All of these are compared in Figure 1. We also compare the initialization errors in the table in Figure 1.
Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, Iowa State University, Ames, IA, USA. Correspondence to: Ankit Pratap Singh <EMAIL>.
Pseudocode | Yes | Algorithm 1: Few-Shot Learning via AltGDmin. Let M† := (MᵀM)⁻¹Mᵀ. [...] Algorithm 2: Byz-AltGDmin-Learn, initialization step. [...] Algorithm 3: Byz-AltGDmin-Learn, complete algorithm.
Open Source Code | No | The paper does not include any explicit statement about releasing source code or a link to a code repository.
Open Datasets | No | The paper discusses generating synthetic data, e.g., "all the feature vectors for all the tasks are i.i.d. standard Gaussian," and refers to "Monte Carlo runs." It does not mention using publicly available datasets with concrete access information such as a link, DOI, or formal citation.
Dataset Splits | No | The paper mentions "Sample-split: Partition the data into 2T + 1 equal-sized disjoint sets" for the algorithm's iterations, but it does not provide specific details for a train/validation/test split of a fixed dataset (e.g., percentages, sample counts, or citations to predefined splits).
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not mention any specific software dependencies or their version numbers (e.g., Python 3.x, PyTorch x.x).
Experiment Setup | Yes | Algorithm 1 lists "Parameters: GD step size, η; Number of iterations, T". Theorem 2.1 specifies η = 0.4/(m σ₁²) and T = C κ² log(1/ε). Lemma 3.3 mentions step size η ≤ 0.5/σ₁². Algorithm 2 lists "Parameters: T_pow, T_GM" and specifies the initialization step, including the truncation threshold α = (C̃/mq) Σ_k ‖y_k‖², with C̃ = 9κ²µ².
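The two quantities quoted above, the error metric SD_F and the GMoM aggregator, can be sketched as follows. This is an illustrative reconstruction, not the paper's code: it assumes SD_F is the standard Frobenius-norm subspace distance between column spans, and that GMoM (geometric median of means) averages vectors within batches and then takes the geometric median of the batch means via Weiszfeld iterations. All function names are ours.

```python
import numpy as np

def subspace_dist_F(U1, U2):
    """Frobenius-norm subspace distance between the column spans of
    two orthonormal-basis matrices U1, U2 (both n x r)."""
    n = U1.shape[0]
    # Project U2 onto the orthogonal complement of span(U1).
    return np.linalg.norm((np.eye(n) - U1 @ U1.T) @ U2, "fro")

def geometric_median(points, n_iter=200, tol=1e-10):
    """Weiszfeld iteration for the geometric median of row vectors."""
    z = points.mean(axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(points - z, axis=1)
        d = np.maximum(d, tol)  # avoid division by zero at a data point
        w = 1.0 / d
        z_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

def gmom(vectors, n_batches):
    """Geometric median of means: average within each batch, then take
    the geometric median of the batch means (robust to a minority of
    corrupted inputs)."""
    batches = np.array_split(vectors, n_batches)
    means = np.stack([b.mean(axis=0) for b in batches])
    return geometric_median(means)
```

For example, if one of ten node gradients is Byzantine (arbitrarily large), a plain mean is dragged toward the outlier, whereas `gmom` with five batches stays close to the honest value, which is the robustness property the GMoM aggregator is meant to provide.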