How Does Batch Normalization Help Optimization?
Authors: Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, Aleksander Madry
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide an empirical demonstration of these findings as well as their theoretical justification. To this end, we start by investigating the connection between ICS and Batch Norm. Specifically, we consider first training a standard VGG [26] architecture on CIFAR-10 [15] with and without Batch Norm. As expected, Figures 1(a) and (b) show a drastic improvement, both in terms of optimization and generalization performance, for networks trained with Batch Norm layers. |
| Researcher Affiliation | Academia | Shibani Santurkar (MIT, shibani@mit.edu); Dimitris Tsipras (MIT, tsipras@mit.edu); Andrew Ilyas (MIT, ailyas@mit.edu); Aleksander Madry (MIT, madry@mit.edu) |
| Pseudocode | No | No pseudocode or algorithm blocks are present in the paper. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | Specifically, we consider first training a standard VGG [26] architecture on CIFAR-10 [15] with and without Batch Norm. |
| Dataset Splits | No | The paper mentions training and testing on CIFAR-10 but does not specify the exact training/validation/test splits used for the experiments, either in the main text or in Appendix A (which only describes noise injection details). |
| Hardware Specification | No | The paper does not specify the exact hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper does not specify any particular software dependencies with version numbers. |
| Experiment Setup | Yes | Standard, LR=0.1; Standard + Batch Norm, LR=0.1; Standard, LR=0.5; Standard + Batch Norm, LR=0.5 (from the Figure 1 legend, illustrating the learning rates used in the experiments; see the sketch after this table). |
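The Research Type, Open Datasets, and Experiment Setup rows all point to the same core comparison: a VGG network trained on CIFAR-10 with and without Batch Norm at learning rates 0.1 and 0.5. Below is a minimal sketch of that comparison, not the authors' code: it assumes torchvision's `vgg16`/`vgg16_bn` as stand-ins for the paper's VGG variant, plain SGD, a batch size of 128, commonly used CIFAR-10 normalization statistics, and a single epoch per run to keep the sketch cheap. None of these choices are specified in the paper.

```python
# Hypothetical sketch of the Figure 1 comparison (VGG on CIFAR-10 with and
# without Batch Norm at LR 0.1 and 0.5). Architecture, optimizer settings,
# batch size, and epoch count are assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torchvision
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.models import vgg16, vgg16_bn

transform = transforms.Compose([
    transforms.ToTensor(),
    # Commonly used CIFAR-10 channel statistics (assumption, not from the paper).
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.247, 0.243, 0.261)),
])
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"


def train(model, lr, epochs=1):
    """Plain SGD training loop; momentum and weight decay intentionally omitted."""
    model = model.to(device)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return loss.item()


# Four configurations mirroring the Figure 1 legend. torchvision's VGG accepts
# 32x32 inputs via its adaptive pooling layer, though the paper's exact
# CIFAR-specific VGG configuration is likely different.
for lr in (0.1, 0.5):
    for name, ctor in (("Standard", vgg16), ("Standard + Batch Norm", vgg16_bn)):
        final_loss = train(ctor(num_classes=10), lr=lr)
        print(f"{name}, LR={lr}: final training loss {final_loss:.3f}")
```

Under these assumptions, the interesting contrast is the LR=0.5 runs: the paper reports that the Batch Norm network remains trainable at the higher learning rate while the standard network does not, which is what such a sketch would be probing.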