An Alternative View: When Does SGD Escape Local Minima?

Authors: Bobby Kleinberg, Yuanzhi Li, Yang Yuan

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, we observe that the loss surface of neural networks enjoys nice one point convexity properties locally, therefore our theorem helps explain why SGD works so well for neural networks. In this section, we explore the loss surfaces of modern neural networks, and show that they enjoy many nice one point convex properties." (The one-point convexity condition referred to here is restated below the table.)
Researcher Affiliation | Academia | "1 Department of Computer Science, Cornell University; 2 Department of Computer Science, Princeton University."
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link about the availability of open-source code for the described methodology.
Open Datasets | Yes | "We ran experiments on Resnet (He et al., 2016) (34 layers, 1.2M parameters), Densenet (Huang et al., 2016) (100 layers, 0.8M parameters) on cifar10 and cifar100, each for 5 trials with 300 epochs and different initializations."
Dataset Splits | No | The paper does not provide the dataset split information (exact percentages, sample counts, or splitting methodology) needed to reproduce the data partitioning. It mentions "validation loss" but not how the splits were created.
Hardware Specification | No | The paper acknowledges support from a "Microsoft Azure research award and Amazon AWS research award" but does not specify the hardware used for the experiments (e.g., GPU/CPU models or memory).
Software Dependencies | No | The paper does not name the ancillary software (e.g., libraries or solvers with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | "In these experiments, we have used the standard step size schedule (0.1 initially, 0.01 after epoch 150, and 0.001 after epoch 225). We ran experiments on Resnet... Densenet... each for 5 trials with 300 epochs and different initializations." (A training-loop sketch of this schedule follows the table.)
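
The "Research Type" evidence cites the paper's local one-point convexity observation. As a hedged reminder rather than a restatement of the paper's exact definition (its constants, neighborhoods, and terminology are not reproduced here), a loss f is c-one-point strongly convex with respect to a point x* on a region D when the negative gradient always points sufficiently toward x*:

\[
  \langle -\nabla f(x),\; x^{*} - x \rangle \;\ge\; c\,\lVert x^{*} - x \rVert^{2}
  \qquad \text{for all } x \in D, \text{ for some } c > 0 .
\]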
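
The experiment-setup quote pins down only the step-size schedule, the datasets, the architectures (by name), the trial count, and the epoch budget. Below is a minimal PyTorch sketch of that schedule, assuming standard CIFAR-10 training; the model choice (a torchvision ResNet-18 stand-in), batch size, momentum, weight decay, and lack of data augmentation are illustrative assumptions, not values reported in the paper.

    import torch
    import torch.nn as nn
    import torchvision
    import torchvision.transforms as T

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # CIFAR-10 as distributed by torchvision; the paper also reports CIFAR-100 runs.
    train_set = torchvision.datasets.CIFAR10(
        root="./data", train=True, download=True, transform=T.ToTensor())
    train_loader = torch.utils.data.DataLoader(
        train_set, batch_size=128, shuffle=True, num_workers=2)  # batch size assumed

    # Stand-in model: the paper uses a 34-layer ResNet (~1.2M parameters) and a
    # 100-layer DenseNet (~0.8M parameters) whose exact configurations are not given.
    model = torchvision.models.resnet18(num_classes=10).to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(
        model.parameters(), lr=0.1,           # initial step size from the paper
        momentum=0.9, weight_decay=5e-4)      # assumed, not stated in the paper
    # Step-size schedule from the paper: 0.1 initially, 0.01 after epoch 150,
    # 0.001 after epoch 225.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[150, 225], gamma=0.1)

    for epoch in range(300):                  # 300 epochs per trial, as reported
        model.train()
        for inputs, targets in train_loader:
            inputs, targets = inputs.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
        scheduler.step()

Repeating this loop five times with different random seeds, and swapping in CIFAR-100 or a DenseNet, would mirror the reported five-trial protocol; the remaining hyperparameters would still have to be guessed, which is what the "Dataset Splits", "Hardware Specification", and "Software Dependencies" rows above flag.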