Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

One-Shot Federated Learning: Theoretical Limits and Algorithms to Achieve Them

Authors: Saber Salehkaleybar, Arsalan Sharifnassab, S. Jamaloddin Golestani

JMLR 2021 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 6. Experiments. We evaluated the performance of MRE-C on two learning tasks and compared with the averaging method (AVGM) in (Zhang et al., 2012). ... In Fig. 3, the average of ∥θ̂ − θ∥ is computed over 100 instances for different numbers of machines in the range [10^4, 10^6]. Both experiments suggest that the average error of MRE-C keeps decreasing as the number of machines increases.
Researcher Affiliation | Academia | Saber Salehkaleybar EMAIL, Arsalan Sharifnassab EMAIL, S. Jamaloddin Golestani EMAIL; Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
Pseudocode | Yes | Algorithm 1: MRE-C algorithm. // Constructing each sub-signal at machine i: 1. Obtain θ_i according to (10). 2. s ← the closest point in grid G to θ_i. ...
Open Source Code | Yes | The source code of the MRE-C algorithm is publicly available at https://github.com/sabersalehk/MRE_C.
Open Datasets | No | The paper describes how synthetic data was generated for the experiments (e.g., each sample (X, Y) is generated based on a linear model Y = X^T θ + E, where X, E, and θ are sampled from N(0, I_{d×d}), N(0, 0.01), and the uniform distribution over [0, 1]^d, respectively), but does not specify the use of any publicly available or open datasets with concrete access information.
Dataset Splits | No | The paper mentions experimental parameters such as d = 2 and n = 1 (samples per machine), and notes that results are computed over 100 instances. However, it does not provide train/test/validation split percentages or a methodology for partitioning the data.
Hardware Specification | No | The paper does not provide any specific hardware details, such as GPU models, CPU types, or memory configurations, used for running the experiments.
Software Dependencies | No | The paper does not specify any ancillary software dependencies (e.g., libraries, frameworks, or solvers) with version numbers that would be needed to replicate the experiments.
Experiment Setup | Yes | In both experiments, we consider a two-dimensional domain (d = 2) and assume that each machine has access to one sample (n = 1). In Fig. 3, the average of ∥θ̂ − θ∥ is computed over 100 instances for different numbers of machines in the range [10^4, 10^6]. ... We consider the square loss function with ℓ2-norm regularization: f(θ) = (X^T θ − Y)^2 + 0.1∥θ∥^2.
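The synthetic setup quoted above (linear model Y = X^T θ + E, square loss with ℓ2 regularization) can be sketched as follows. This is a minimal illustration, not the authors' code: the seed, variable names, and machine count are illustrative, and N(0, 0.01) is read as a Gaussian with variance 0.01.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2      # domain dimension, as in the paper's setup
m = 1000   # number of machines (the paper sweeps 10^4 to 10^6)
n = 1      # samples per machine

# theta sampled uniformly from [0, 1]^d
theta_true = rng.uniform(0.0, 1.0, size=d)
# X ~ N(0, I_{dxd}); one row per sample across all machines
X = rng.multivariate_normal(np.zeros(d), np.eye(d), size=m * n)
# E ~ N(0, 0.01), interpreted here as variance 0.01 (std 0.1)
E = rng.normal(0.0, np.sqrt(0.01), size=m * n)
# linear model Y = X^T theta + E
Y = X @ theta_true + E

def loss(theta, x, y):
    """Square loss with l2 regularization: f(theta) = (x^T theta - y)^2 + 0.1 ||theta||^2."""
    return (x @ theta - y) ** 2 + 0.1 * np.dot(theta, theta)
```

Given samples generated this way, each machine would minimize `loss` locally over its own data; the paper's reported error metric averages ∥θ̂ − θ∥ over 100 such instances.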