Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Scaling-Up Robust Gradient Descent Techniques
Authors: Matthew J. Holland
AAAI 2021, pp. 7694-7701 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we study the efficiency and robustness of the proposed algorithm and its key competitors in a tightly controlled simulated setting (section ), verifying a substantial improvement in the cost-performance tradeoff, robustness to heavy-tailed data, and performance that scales well to higher dimensions. |
| Researcher Affiliation | Academia | Matthew J. Holland Osaka University EMAIL |
| Pseudocode | Yes | Algorithm 1 Robust divide and conquer archetype; DC-SGD [Zn, w0; k]. |
| Open Source Code | Yes | Repository: https://github.com/feedbackward/sgd-roboost |
| Open Datasets | No | The paper describes a simulated experimental setup ('we provide the learner with random losses of the form L(w; Z) = (⟨w − w∗, X⟩ + E)²/2') rather than using a publicly available dataset with concrete access information. |
| Dataset Splits | No | The paper describes a simulated setting and does not provide specific train/validation/test dataset split information for a pre-existing dataset. |
| Hardware Specification | No | The paper states 'Complete details of the experimental setup are provided in the supplementary materials' but does not specify any hardware details like GPU/CPU models or specific machine configurations in the main text. |
| Software Dependencies | No | The paper states 'Complete details of the experimental setup are provided in the supplementary materials' but does not list any specific software dependencies with version numbers in the main text. |
| Experiment Setup | No | The paper states 'Complete details of the experimental setup are provided in the supplementary materials' and 'All detailed settings are in the supplementary materials.' Therefore, the main text does not contain specific experimental setup details like hyperparameters or training configurations. |
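To make the paper's setup concrete, the sketch below combines the two technical elements quoted in the table: the simulated squared losses L(w; Z) = (⟨w − w∗, X⟩ + E)²/2 and the divide-and-conquer archetype of Algorithm 1 (run SGD independently on k data subsets, then robustly merge the candidates). This is a minimal illustration, not the paper's implementation: the heavy-tailed noise law (Student-t), the learning rate, and the geometric-median merge step are all assumptions made here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 5                              # dimension (chosen for the sketch)
w_star = rng.normal(size=d)        # unknown true parameter

def sample_data(n, heavy_tailed=True):
    """Draw inputs X and noise E for losses L(w; Z) = (<w - w*, X> + E)^2 / 2."""
    X = rng.normal(size=(n, d))
    # Student-t noise stands in for "heavy-tailed data"; the paper's exact
    # noise distribution is not specified in the main text.
    E = rng.standard_t(df=2.1, size=n) if heavy_tailed else rng.normal(size=n)
    return X, E

def sgd(w0, X, E, lr=0.01):
    """Single-pass SGD on the simulated squared loss above."""
    w = w0.copy()
    for x, e in zip(X, E):
        grad = (np.dot(w - w_star, x) + e) * x   # gradient of the loss in w
        w -= lr * grad
    return w

def geometric_median(points, iters=100):
    """Weiszfeld iterations: one plausible robust merge step
    (the paper's exact merging rule may differ)."""
    m = points.mean(axis=0)
    for _ in range(iters):
        dist = np.maximum(np.linalg.norm(points - m, axis=1), 1e-12)
        wts = 1.0 / dist
        m = (points * wts[:, None]).sum(axis=0) / wts.sum()
    return m

def dc_sgd(n=5000, k=10):
    """Divide-and-conquer archetype in the spirit of DC-SGD [Zn, w0; k]:
    split the n samples into k subsets, run SGD on each, merge robustly."""
    X, E = sample_data(n)
    cands = [sgd(np.zeros(d), Xs, Es)
             for Xs, Es in zip(np.array_split(X, k), np.array_split(E, k))]
    return geometric_median(np.array(cands))

w_hat = dc_sgd()
print("estimation error:", np.linalg.norm(w_hat - w_star))
```

The robust merge is the point of the archetype: a single heavy-tailed noise draw can throw off one subset's SGD candidate, but the geometric median of the k candidates is insensitive to a minority of bad ones, which is how the divide-and-conquer scheme buys robustness at roughly the cost of plain SGD.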