What Do Neural Networks Learn When Trained With Random Labels?

Authors: Hartmut Maennel, Ibrahim M. Alabdulmohsin, Ilya O. Tolstikhin, Robert Baldock, Olivier Bousquet, Sylvain Gelly, Daniel Keysers

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We study deep neural networks (DNNs) trained on natural image data with entirely random labels. ... These effects are studied in several network architectures, including VGG16 and ResNet18, on CIFAR10 and ImageNet."
Researcher Affiliation | Industry | Hartmut Maennel (hartmutm@google.com), Ibrahim Alabdulmohsin (ibomohsin@google.com), Ilya Tolstikhin (tolstikhin@google.com), Robert J. N. Baldock (rbaldock@google.com), Olivier Bousquet (obousquet@google.com), Sylvain Gelly (sylvaingelly@google.com), Daniel Keysers (keysers@google.com), Google Research, Brain Team, Zürich, Switzerland
Pseudocode | No | The paper describes its methods in prose and mathematical derivations, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper contains no explicit statement about releasing source code for the described methodology, nor a link to a code repository.
Open Datasets | Yes | "These effects are studied in several network architectures, including VGG16 and ResNet18, on CIFAR10 [31] and ImageNet ILSVRC-2012 [14]."
Dataset Splits | No | The paper refers to 'training' and 'test' subsets (e.g., in Table 1) and mentions 20k CIFAR10 examples for pre-training and 25k CIFAR10 examples for fine-tuning, but it specifies no validation split: neither percentages or counts dedicated to a validation set, nor a reference to standard validation splits.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU types, or TPU versions) used to conduct the experiments.
Software Dependencies | No | The paper names general tools and architectures (e.g., VGG16, ResNet18) but lists no software dependencies with version numbers (programming languages, libraries, or frameworks).
Experiment Setup | Yes | "We conduct experiments verifying that these effects are present in several network architectures, including VGG16 [46] and ResNet18-v2 [22], on CIFAR10 [31] and ImageNet ILSVRC-2012 [14], across a range of hyper-parameters, such as the learning rate, initialization, number of training iterations, width and depth. Experimental details are provided in Appendix A and B."
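The "random labels" setup the quoted abstract describes can be sketched as follows. This is a minimal illustration, not the paper's code: the function name and the use of NumPy are assumptions, and the placeholder label array stands in for real CIFAR10 labels.

```python
import numpy as np

def randomize_labels(labels, num_classes, seed=0):
    """Replace each label with one drawn uniformly at random,
    independent of the corresponding input image."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, num_classes, size=len(labels))

# Placeholder for the 50k CIFAR10 training labels (10 classes).
true_labels = np.zeros(50000, dtype=np.int64)
random_labels = randomize_labels(true_labels, num_classes=10)
```

A network trained on `random_labels` instead of `true_labels` receives targets carrying no information about the images, which is the regime the paper studies.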