Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Order Optimal One-Shot Distributed Learning
Authors: Arsalan Sharifnassab, Saber Salehkaleybar, S. Jamaloddin Golestani
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluated the performance of MRE-C-log on two learning tasks and compared with the averaging method (AVGM) in [Zhang et al., 2012]. |
| Researcher Affiliation | Academia | Arsalan Sharifnassab, Saber Salehkaleybar, S. Jamaloddin Golestani Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran |
| Pseudocode | No | The algorithms (Multi-Resolution Estimator, MRE-C-log) are described step-by-step in natural language within the text, but no formal pseudocode block or algorithm figure is provided. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper states that samples are 'generated based on a linear model' or 'randomly drawn from {−1, 1}', indicating synthetic data, and does not provide access information or citations for a publicly available dataset. |
| Dataset Splits | No | The paper does not provide specific dataset split information (e.g., percentages or counts for training, validation, and test sets). It mentions '100 instances' but this refers to repetitions of the experiment, not data splits. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory specifications, or type of computing cluster) used for running its experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies, libraries, or programming language versions used for implementing or running the experiments. |
| Experiment Setup | Yes | The paper specifies experimental setup details such as the problem types (ridge regression, logistic regression), dimensionality (d = 2), number of samples per machine (n = 1), and the loss function used for ridge regression: f(θ) = (θᵀX − Y)² + 0.1‖θ‖₂². |
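The ridge regression loss quoted in the Experiment Setup row can be sketched in code. This is a minimal illustration of the formula f(θ) = (θᵀX − Y)² + 0.1‖θ‖₂² with d = 2 and a single sample per machine, as the setup describes; the function and variable names are illustrative assumptions, not taken from the paper's (unavailable) code.

```python
import numpy as np

def ridge_loss(theta, X, Y, reg=0.1):
    """Squared-error loss on a single sample (X, Y) plus an L2 penalty.

    Implements f(theta) = (theta^T X - Y)^2 + reg * ||theta||_2^2,
    the per-machine loss quoted in the paper's experiment setup.
    """
    residual = theta @ X - Y          # theta^T X - Y (scalar for one sample)
    return residual ** 2 + reg * np.dot(theta, theta)

# Example with d = 2 and n = 1 sample, matching the described setup.
# (Values here are arbitrary, chosen only for illustration.)
theta = np.array([0.5, -0.25])
X = np.array([1.0, 2.0])
Y = 1.0
print(ridge_loss(theta, X, Y))  # → 1.03125
```

The regularization weight 0.1 matches the coefficient in the paper's stated loss; everything else (sample values, parameter vector) is made up for the sketch.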