Understanding the unstable convergence of gradient descent

Authors: Kwangjun Ahn, Jingzhao Zhang, Suvrit Sra

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We investigate this unstable convergence phenomenon from first principles, and discuss key causes behind it. We also identify its main characteristics, and how they interrelate based on both theory and experiments, offering a principled view toward understanding the phenomenon." (Abstract) "Example of unstable convergence for training CIFAR-10 with GD." (Figure 1)
Researcher Affiliation | Academia | "1 Department of EECS, MIT, Cambridge, MA, USA. 2 Part of this work was done while Kwangjun Ahn was visiting the Simons Institute for the Theory of Computing, Berkeley, CA, USA. 3 IIIS, Tsinghua University, Beijing, China."
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology it describes.
Open Datasets | Yes | "Experiment 1 (CIFAR-10 experiment). For this example, we follow the setting of the main experiment (Cohen et al., 2021) in their Section 3. Specifically, we use (full-batch) GD to train a neural network on 5,000 examples from CIFAR-10 with the Cross Entropy loss."
Dataset Splits | No | "Experiment 1 (CIFAR-10 experiment). For this example, we follow the setting of the main experiment (Cohen et al., 2021) in their Section 3. Specifically, we use (full-batch) GD to train a neural network on 5,000 examples from CIFAR-10 with the Cross Entropy loss, and the network is a fully-connected architecture with two hidden layers of width 200." The quoted setup does not describe a train/validation/test split.
Hardware Specification | No | The paper does not contain specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | "Specifically, we use (full-batch) GD to train a neural network on 5,000 examples from CIFAR-10 with the Cross Entropy loss, and the network is a fully-connected architecture with two hidden layers of width 200. We choose the step size η = 2/30." (Experiment 1, Section 3.2) Also: "train the network with SGD with minibatch size of 32 and step size η = 2/100." (Experiment 7, Section 5) A sketch of this setup is given after the table.
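Since the paper releases no code, the following is a minimal PyTorch sketch of the reported setup, not the authors' implementation: full-batch GD on 5,000 CIFAR-10 examples with cross-entropy loss, a fully connected network with two hidden layers of width 200, and step size η = 2/30. The choice of the first 5,000 examples, the tanh activation, the lack of input normalization, and the number of steps are assumptions not stated in the quoted text.

```python
# Minimal sketch (not the authors' code) of the reported setup:
# full-batch GD, 5,000 CIFAR-10 examples, cross-entropy loss,
# fully connected net with two hidden layers of width 200, step size 2/30.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=T.ToTensor()
)

# Take the first 5,000 examples and load them as one full batch (assumption:
# the paper does not say which 5,000 examples are used).
subset = torch.utils.data.Subset(train_set, range(5000))
loader = torch.utils.data.DataLoader(subset, batch_size=5000, shuffle=False)
X, y = next(iter(loader))
X = X.view(X.size(0), -1)  # flatten 3x32x32 images to 3072-dim vectors

model = nn.Sequential(
    nn.Linear(3 * 32 * 32, 200), nn.Tanh(),  # activation is an assumption
    nn.Linear(200, 200), nn.Tanh(),
    nn.Linear(200, 10),
)
loss_fn = nn.CrossEntropyLoss()

eta = 2 / 30  # step size from Experiment 1
optimizer = torch.optim.SGD(model.parameters(), lr=eta)  # no momentum

for step in range(1000):  # number of steps is an assumption
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    if step % 100 == 0:
        print(f"step {step}: loss {loss.item():.4f}")
```

Plain `torch.optim.SGD` applied to the single full batch with no momentum reduces to vanilla gradient descent, matching the "(full-batch) GD" description; switching to a DataLoader with `batch_size=32`, `shuffle=True`, and `lr=2/100` would give the minibatch SGD variant quoted for Experiment 7.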