Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Gradient Descent Provably Optimizes Over-parameterized Neural Networks
Authors: Simon S. Du, Xiyu Zhai, Barnabas Poczos, Aarti Singh
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5 (Experiments): "In this section, we use synthetic data to corroborate our theoretical findings." |
| Researcher Affiliation | Academia | Simon S. Du, Machine Learning Department, Carnegie Mellon University; Xiyu Zhai, Department of EECS, Massachusetts Institute of Technology; Barnabás Póczos, Machine Learning Department, Carnegie Mellon University; Aarti Singh, Machine Learning Department, Carnegie Mellon University |
| Pseudocode | No | The paper does not contain any sections or figures explicitly labeled as 'Pseudocode' or 'Algorithm', nor are there structured steps formatted like code. |
| Open Source Code | No | The paper does not contain any statement about releasing source code for the described methodology, nor does it provide any links to a code repository. |
| Open Datasets | No | We use synthetic data to corroborate our theoretical findings. We uniformly generate n = 1000 data points from a d = 1000 dimensional unit sphere and generate labels from a one-dimensional standard Gaussian distribution. |
| Dataset Splits | No | The paper states 'We use synthetic data to corroborate our theoretical findings.' but does not specify any dataset splits (training, validation, test) or cross-validation setup. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instances) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | The paper states only "For all experiments, we run 100 epochs of gradient descent and use a fixed step size," without specifying further hyperparameters or configuration details. |
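The two experimental details quoted in the table (data drawn uniformly from a unit sphere with one-dimensional standard Gaussian labels, and 100 epochs of gradient descent at a fixed step size) can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the two-layer ReLU architecture, hidden width `m`, step size `eta`, and initialization scale are assumptions, and smaller `n`, `d` are used here than the paper's n = d = 1000 to keep the sketch fast.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data as quoted in the table: points drawn uniformly from the
# unit sphere, labels from a one-dimensional standard Gaussian.
# (The paper uses n = d = 1000; smaller sizes here for speed.)
n, d, m = 100, 100, 1024  # m: hidden width, assumed large (over-parameterized)

X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # normalize rows onto the unit sphere
y = rng.standard_normal(n)                     # standard Gaussian labels

# Hypothetical two-layer ReLU network trained by full-batch gradient descent
# with a fixed step size for 100 epochs, matching the quoted setup; only the
# first layer is trained, the output signs are held fixed.
W = rng.standard_normal((d, m))            # first-layer weights (trained)
a = rng.choice([-1.0, 1.0], size=m)        # output weights (held fixed)
eta = 0.1                                  # fixed step size (assumed)

def loss(W):
    pred = np.maximum(X @ W, 0.0) @ a / np.sqrt(m)
    return 0.5 * np.sum((pred - y) ** 2)

init_loss = loss(W)
for epoch in range(100):
    H = X @ W                              # pre-activations, shape (n, m)
    pred = np.maximum(H, 0.0) @ a / np.sqrt(m)
    resid = pred - y                       # gradient of the squared loss wrt pred
    grad_W = X.T @ (resid[:, None] * (H > 0) * (a / np.sqrt(m)))
    W -= eta * grad_W

final_loss = loss(W)
```

In the over-parameterized regime the paper analyzes, full-batch gradient descent with a sufficiently small fixed step size drives the training loss down monotonically, which is what this sketch exhibits.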