Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent

Authors: Christopher M. De Sa, Satyen Kale, Jason D. Lee, Ayush Sekhari, Karthik Sridharan

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | 3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A] (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [N/A] (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A] (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]
Researcher Affiliation | Collaboration | Ayush Sekhari EMAIL; Satyen Kale EMAIL; Jason D. Lee EMAIL; Chris De Sa EMAIL; Karthik Sridharan EMAIL. Affiliations: Google Research, NY; Princeton University and Google Research, Princeton; Cornell University.
Pseudocode | No | The paper describes its algorithms (Gradient Descent, Stochastic Gradient Descent, Gradient Flow) mathematically but does not present them in a structured pseudocode block or an explicitly labeled algorithm box.
Open Source Code | No | The checklist explicitly answers '[N/A]' to the questions on code and datasets, and the paper contains no statement about releasing source code for the described methodology.
Open Datasets | No | The paper is theoretical and reports no empirical experiments with datasets. The checklist section 'If you ran experiments...' explicitly answers '[N/A]' to the data-related questions.
Dataset Splits | No | The paper reports no empirical experiments and uses no datasets; consequently, there is no mention of training/validation/test splits.
Hardware Specification | No | The paper reports no empirical experiments, and the checklist section 'If you ran experiments...' explicitly answers '[N/A]' to the question on total compute and resources used; no hardware specifications are provided.
Software Dependencies | No | The paper is theoretical and describes no empirical experiments; it therefore lists no software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and reports no empirical experiments, so no experimental setup details, such as hyperparameters or training settings, are provided.