Predicting the Generalization Gap in Deep Networks with Margin Distributions
Authors: Yiding Jiang, Dilip Krishnan, Hossein Mobahi, Samy Bengio
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On the CIFAR-10 and the CIFAR-100 datasets, our proposed measure correlates very strongly with the generalization gap. In addition, we find the following other factors to be of importance: normalizing margin values for scale independence, using characterizations of margin distribution rather than just the margin (closest distance to decision boundary), and working in log space instead of linear space (effectively using a product of margins rather than a sum). Our measure can be easily applied to feedforward deep networks with any architecture and may point towards new training loss functions that could enable better generalization. *(A sketch of the margin-signature computation appears after this table.)* |
| Researcher Affiliation | Industry | Yiding Jiang, Dilip Krishnan, Hossein Mobahi, Samy Bengio (Google AI); {ydjiang, dilipkay, hmobahi, bengio}@google.com |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The trained models and relevant TensorFlow (Abadi et al., 2016) code to compute margin distributions are released at https://github.com/google-research/google-research/tree/master/demogen |
| Open Datasets | Yes | On the CIFAR-10 and the CIFAR-100 datasets, our proposed measure correlates very strongly with the generalization gap. |
| Dataset Splits | Yes | We use 90/10 split, fit the linear model with the training pool, and measure R² on the held out pool. *(This evaluation protocol is sketched after this table.)* |
| Hardware Specification | No | The paper does not provide any specific hardware specifications (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Tensorflow Abadi et al. (2016)' but does not specify a version number for TensorFlow or any other software dependencies. |
| Experiment Setup | Yes | Using the CIFAR-10 dataset, we train 216 nine-layer convolutional networks with different settings of hyperparameters and training techniques. We apply weight decay and dropout with different strengths; we use networks with and without batch norm and data augmentation; we change the number of hidden units in the hidden layers. [...] All networks are trained by SGD with momentum. Further details are provided in the supplementary material (Sec. 6). *(An illustrative hyperparameter grid is sketched after this table.)* |
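
To make the margin-distribution measure quoted in the Research Type row concrete, here is a minimal sketch (not the authors' released DEMOGEN code) of turning per-example margins at one layer into a normalized, log-space signature. The function name, the choice of quantiles, and the use of an activation standard deviation as the scale normalizer are illustrative assumptions; the paper computes margins at several layers and normalizes by the total variation of the activations.

```python
# Minimal sketch: summarize a margin distribution as a small feature vector.
# `margins` is assumed to already hold approximate distances to the decision
# boundary, e.g. (f_y - f_j) / ||grad_x f_y - grad_x f_j|| as in the paper.
import numpy as np

def margin_signature(margins, activations, quantiles=(0.25, 0.5, 0.75)):
    # Scale-normalize so the signature is insensitive to rescaling of the
    # network's weights (this particular normalizer is an assumption).
    scale = np.sqrt(np.mean(np.var(activations, axis=0))) + 1e-12
    normalized = margins / scale

    # Keep positive margins so the log below is defined (a simplification).
    positive = normalized[normalized > 0]

    # Characterize the whole distribution, not just the smallest margin,
    # and work in log space (a product of margins rather than a sum).
    return np.log(np.quantile(positive, quantiles))
```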
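
The 90/10 protocol quoted in the Dataset Splits row amounts to a linear regression across many trained models. The sketch below uses scikit-learn (an assumed dependency; the released code is TensorFlow-based) with placeholder arrays standing in for the real margin signatures and measured generalization gaps.

```python
# Sketch of the evaluation protocol: regress the generalization gap on the
# margin signatures of many trained models, 90/10 split, R^2 on the held-out pool.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X = np.random.randn(216, 12)   # placeholder: one signature per trained network
y = np.random.rand(216)        # placeholder: each network's generalization gap

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print("held-out R^2:", r2_score(y_te, model.predict(X_te)))
```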
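
For the Experiment Setup row, the sketch below shows the general shape of a hyperparameter grid over weight decay, dropout, batch normalization, data augmentation, and network width. The specific values are placeholders and do not reproduce the paper's exact 216-model grid, which is detailed in its supplementary material (Sec. 6).

```python
# Illustrative hyperparameter grid; the values are placeholders, not the paper's.
import itertools

grid = {
    "weight_decay": [0.0, 1e-4, 5e-4],      # assumed strengths
    "dropout": [0.0, 0.2, 0.5],             # assumed strengths
    "batch_norm": [False, True],
    "data_augmentation": [False, True],
    "width_multiplier": [1, 2],             # hidden-unit scaling, assumed
}

configs = [dict(zip(grid, vals)) for vals in itertools.product(*grid.values())]
print(len(configs), "configurations")  # 72 with this placeholder grid; the paper trains 216

for cfg in configs:
    pass  # train a nine-layer CNN with SGD + momentum under cfg (omitted)
```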