Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Emergence of Invariance and Disentanglement in Deep Representations

Authors: Alessandro Achille, Stefano Soatto

JMLR 2018 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we perform several experiments with realistic architectures and datasets to validate the assumptions underlying our claims. In particular, we show that using the information in the weights to measure the complexity of a deep neural network (DNN), rather than the number of its parameters, leads to a sharp and theoretically predicted transition between overfitting and underfitting regimes for random labels, shedding light on the questions of Zhang et al. (2017)."
Researcher Affiliation | Academia | "Alessandro Achille EMAIL, Department of Computer Science, University of California, Los Angeles, CA 90095, USA; Stefano Soatto EMAIL, Department of Computer Science, University of California, Los Angeles, CA 90095, USA"
Pseudocode | No | The paper describes theoretical concepts and relationships, and references optimization algorithms like SGD and Variational Dropout, but does not present any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing its source code, nor does it provide a link to a code repository.
Open Datasets | Yes | "Figure 1: (Left) The AlexNet model of Zhang et al. (2017) achieves high accuracy (red) even when trained with random labels on CIFAR-10. ... To test this algorithm, we add random occlusion nuisances to MNIST digits (Figure 4). ... To generate the image x̂ given the 8×8×96 representation z computed by the classifier, we use a similar structure to DCGAN (Radford et al., 2016), namely z → conv 256 → ConvT 256 s2 → ConvT 128 s2 → conv 3 → tanh, where ConvT 256 s2 denotes a transpose convolution with 256 feature maps and stride 2. All convolutions have a batch normalization layer before the activations. Finally, the discriminator network is given by x̂ → conv 64 s2 → conv 128 s2 → ConvT 256 s2 → conv 1 → sigmoid. Here, all convolutions use batch normalization followed by Leaky ReLU activations. In this experiment, we use Gaussian multiplicative noise, which is slightly more stable during training (Appendix B). To stabilize the training of the GAN, we found it useful to (1) scale down the reconstruction error term in the loss function and (2) slowly increase the weight of the reconstruction error up to the desired value during training."
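The quoted generator chain upsamples the 8×8 spatial representation with two stride-2 transpose convolutions. A minimal sketch of the size arithmetic, using the standard transpose-convolution output formula; kernel size 4 and padding 1 are DCGAN-style assumptions not stated in the quote:

```python
def conv_transpose_out(size, kernel, stride, padding=0, output_padding=0):
    # Standard per-dimension output size of a transpose convolution.
    return (size - 1) * stride - 2 * padding + kernel + output_padding

size = 8  # spatial side of the 8x8x96 representation z
# Two stride-2 transpose convolutions (ConvT 256 s2, ConvT 128 s2);
# kernel=4, padding=1 are assumptions, chosen to give clean 2x upsampling.
for _ in range(2):
    size = conv_transpose_out(size, kernel=4, stride=2, padding=1)
print(size)  # 32: 8 -> 16 -> 32
```

With these (assumed) kernel/padding choices each stride-2 layer exactly doubles the spatial side, consistent with reconstructing a full-resolution image from the 8×8 code.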
Dataset Splits | Yes | "In particular, we train a small version of AlexNet on a 28×28 central crop of CIFAR-10 with completely random labels. ... We train with N = 10000 random labels, η = 0.05 and different values of β log-uniformly spaced in [10⁻², 10²]. ... The cluttered MNIST dataset is generated by adding ten 4×4 squares uniformly at random on the digits of the MNIST dataset (LeCun et al., 1998)."
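The cluttered-MNIST construction quoted above (ten 4×4 squares placed uniformly at random on a digit) can be sketched in a few lines of NumPy. The square intensity of 1.0 and the clamping of square positions to lie fully inside the image are assumptions not specified in the quote:

```python
import numpy as np

def add_clutter(digit, n_squares=10, sq=4, rng=None):
    # Occlude a digit image with n_squares solid sq x sq squares placed
    # uniformly at random. Intensity 1.0 is an assumption.
    rng = np.random.default_rng() if rng is None else rng
    img = digit.copy()
    h, w = img.shape
    for _ in range(n_squares):
        r = int(rng.integers(0, h - sq + 1))
        c = int(rng.integers(0, w - sq + 1))
        img[r:r + sq, c:c + sq] = 1.0
    return img

digit = np.zeros((28, 28), dtype=np.float32)  # stand-in for an MNIST digit
cluttered = add_clutter(digit, rng=np.random.default_rng(0))
```

Since squares may overlap, each cluttered image gains between 16 (all ten squares coincident) and 160 occluded pixels.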
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU or CPU models) used for running the experiments.
Software Dependencies | No | The paper mentions using SGD, discusses the reparametrization trick (Kingma and Welling, 2014), and refers to deep learning architectures like AlexNet, ResNets, and CNNs, but does not provide specific version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used.
Experiment Setup | Yes | "We train with learning rates η ∈ {0.02, 0.005} and select the best performing network of the two. Generally, we found that a higher learning rate is needed to overfit when the number of training samples N is small, while a lower learning rate is needed for larger N. We train with SGD with momentum 0.9 for 360 epochs, reducing the learning rate by a factor of 10 every 140 epochs. We use a large batch size of 500 to minimize the noise coming from SGD. No weight decay or other regularization methods are used."