Scaling Properties of Deep Residual Networks

Authors: Alain-Sam Cohen, Rama Cont, Alain Rossier, Renyuan Xu

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. Our main contributions are twofold. Using the methodology described in Section 2, we design detailed numerical experiments to study the scaling of trained network weights across a range of ResNet architectures and datasets, showing the existence of at least three different scaling regimes, none of which correspond to (3). (An illustrative scaling-measurement sketch follows the table.)
Researcher Affiliation | Collaboration | 1 InstaDeep; 2 Mathematical Institute, University of Oxford. Correspondence to: Alain Rossier <rossier@maths.ox.ac.uk>.
Pseudocode | No | The paper describes mathematical formulations and experimental procedures but does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Our code is publicly available at https://github.com/instadeepai/scaling-resnets.
Open Datasets | Yes | The second dataset is a low-dimensional embedding of the MNIST handwritten digits dataset (LeCun et al., 1998). We train our residual networks at depths ranging from Lmin = 8 to Lmax = 121 on the CIFAR-10 (Krizhevsky et al.) dataset. (A dataset-loading sketch follows the table.)
Dataset Splits | No | The paper does not explicitly provide details about training/validation/test dataset splits with specific percentages, sample counts, or citations to predefined splits. It mentions training on datasets like MNIST and CIFAR-10, but the splitting methodology is not detailed.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not list specific software components with their version numbers (e.g., Python 3.8, PyTorch 1.9) that would be needed to reproduce the experiments.
Experiment Setup | Yes | The weights are updated by stochastic gradient descent (SGD) on the unregularized mean-squared loss using batches of size B and a constant learning rate η. We perform SGD updates until the loss falls below ϵ, or when the maximum number of updates Tmax is reached. We repeat the experiments for several depths L varying from Lmin to Lmax. All the hyperparameters are given in Appendix A. Appendix A specifies: Synthetic dataset: N = 10000, Lmax = 10000, B = 500, η = 0.05, ϵ = 10^-4, Tmax = 100000. MNIST dataset: N = 60000, Lmax = 10000, B = 500, η = 0.1, ϵ = 10^-4, Tmax = 200000. CIFAR-10 dataset: B = 128, η = 0.01, ϵ = 10^-3, Tmax = 100000, SGD with momentum 0.9, weight decay 10^-4. (A training-loop sketch follows the table.)
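
As a rough illustration of the depth-scaling measurement referenced in the Research Type row, the sketch below fits a scaling exponent to per-depth average weight norms by log-log regression. This is an assumption-laden reconstruction, not the authors' code: it presumes trained-weight magnitudes behave approximately like C * L^(-beta), and the norm values in the example are synthetic.

# Illustrative sketch, not the authors' code: estimate a depth-scaling exponent
# beta by regressing log(average trained-weight norm) on log(depth L), assuming
# an approximate power-law form ||W|| ~ C * L^(-beta).
import numpy as np

def scaling_exponent(depths, weight_norms):
    """depths: trained depths L; weight_norms: matching average weight norms."""
    slope, _ = np.polyfit(np.log(depths), np.log(weight_norms), deg=1)
    return -slope  # beta in ||W|| ~ C * L^(-beta)

# Hypothetical measurements that decay roughly like L^(-0.5).
depths = np.array([8, 16, 32, 64, 128])
norms = 0.3 * depths ** -0.5
print(scaling_exponent(depths, norms))  # ~0.5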
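
Both datasets named in the Open Datasets row are standard and publicly downloadable. The snippet below is a minimal sketch of fetching them via torchvision; the low-dimensional MNIST embedding used in the paper is an additional preprocessing step not reproduced here.

# Minimal sketch: obtaining the public datasets via torchvision. The paper's
# low-dimensional MNIST embedding is a separate preprocessing step not shown.
import torchvision
from torchvision import transforms

to_tensor = transforms.ToTensor()
mnist = torchvision.datasets.MNIST(root="data", train=True, download=True, transform=to_tensor)
cifar10 = torchvision.datasets.CIFAR10(root="data", train=True, download=True, transform=to_tensor)
print(len(mnist), len(cifar10))  # 60000 and 50000 training images, respectively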
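
The Experiment Setup row translates into a simple training loop: constant-learning-rate SGD on the mean-squared loss, stopping once the loss falls below ϵ or after Tmax updates. The PyTorch sketch below uses the synthetic-dataset hyperparameters quoted above (B = 500, η = 0.05, ϵ = 10^-4, Tmax = 100000); the toy residual network and random data are placeholders, not the authors' architecture or dataset, so with random targets the loop simply runs to Tmax.

# Hypothetical PyTorch sketch of the reported SGD setup; the toy residual
# network and random data are placeholders, not the authors' code.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

B, ETA, EPS, T_MAX, L, D = 500, 0.05, 1e-4, 100_000, 16, 10  # L and D are illustrative choices

class ToyResNet(nn.Module):
    def __init__(self, depth, dim):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Linear(dim, dim) for _ in range(depth)])

    def forward(self, x):
        for block in self.blocks:
            x = x + torch.tanh(block(x))  # residual update x_{l+1} = x_l + f(x_l)
        return x

X, Y = torch.randn(10_000, D), torch.randn(10_000, D)  # stand-in for the N = 10000 synthetic samples
loader = DataLoader(TensorDataset(X, Y), batch_size=B, shuffle=True)

model = ToyResNet(L, D)
opt = torch.optim.SGD(model.parameters(), lr=ETA)  # constant learning rate, no regularization
loss_fn = nn.MSELoss()                             # unregularized mean-squared loss

t, loss_val = 0, float("inf")
while t < T_MAX and loss_val > EPS:  # stop when loss < eps or after Tmax updates
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
        loss_val, t = loss.item(), t + 1
        if t >= T_MAX or loss_val <= EPS:
            break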