Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning

Authors: Mehdi Sajjadi, Mehran Javanmardi, Tolga Tasdizen

NeurIPS 2016

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We evaluate the proposed method on several benchmark datasets. |
| Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, University of Utah; {mehdi, mehran, tolga}@sci.utah.edu |
| Pseudocode | No | The paper describes the proposed method and loss functions but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions using existing frameworks (cuda-convnet [37] and sparse convolutional networks [38, 39]) but does not state that the code for the proposed methodology is openly provided. |
| Open Datasets | Yes | We show the effect of the proposed unsupervised loss functions using ConvNets on MNIST [2], CIFAR10 and CIFAR100 [34], SVHN [35], NORB [36] and ILSVRC 2012 challenge [5]. |
| Dataset Splits | Yes | We randomly select 10 samples from each class (total of 100 labeled samples). We use all available training data as the unlabeled set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions the cuda-convnet and sparse convolutional network frameworks but does not provide version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | In Eq. 1, we set n to be 4 for experiments conducted using cuda-convnet and 5 for experiments performed using sparse convolutional networks. |
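The quoted setup refers to the paper's transformation/stability loss (Eq. 1), in which each unlabeled sample is passed through the network n times under random transformations and perturbations (e.g. dropout), and the pairwise squared differences between the resulting prediction vectors are minimized. A minimal NumPy sketch, assuming the loss is the sum of pairwise squared L2 distances over the n stochastic predictions (the function name and the random example inputs are illustrative, not from the paper):

```python
import numpy as np

def transformation_stability_loss(preds):
    """Sum of pairwise squared L2 distances between the n prediction
    vectors produced by n stochastic passes over the same unlabeled
    sample (a sketch of the paper's Eq. 1)."""
    n = len(preds)
    loss = 0.0
    for i in range(n - 1):
        for k in range(i + 1, n):
            loss += np.sum((preds[i] - preds[k]) ** 2)
    return loss

# Illustrative usage: n = 4 passes (the cuda-convnet setting),
# each producing a 10-class softmax output for one sample.
rng = np.random.default_rng(0)
preds = rng.dirichlet(np.ones(10), size=4)
print(transformation_stability_loss(preds))
```

The loss is zero exactly when all n passes agree, so minimizing it pushes the network toward predictions that are invariant to the stochastic transformations and perturbations.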