Simple and Effective Stochastic Neural Networks

Authors: Tianyuan Yu, Yongxin Yang, Da Li, Timothy Hospedales, Tao Xiang

AAAI 2021, pp. 3252-3260

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments are carried out to evaluate the efficacy of the proposed framework in four applications: neural network pruning, adversarial attack defense, learning with label noise, and model calibration.
Researcher Affiliation | Collaboration | Tianyuan Yu (1), Yongxin Yang (1), Da Li (2,3), Timothy Hospedales (2,3), Tao Xiang (1). Affiliations: (1) Centre for Vision, Speech and Signal Processing, University of Surrey; (2) School of Informatics, University of Edinburgh; (3) Samsung AI Centre, Cambridge.
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an unambiguous statement of, or direct link to, source code for its method. It cites https://github.com/BVLC/caffe/tree/master/examples/mnist, but that is a link to a third-party framework's example, not the authors' own code.
Open Datasets | Yes | We follow the architecture/dataset combinations used in most recent neural network pruning studies, including the LeNet-5-Caffe network on MNIST (LeCun et al. 1998), VGG-16 (Simonyan and Zisserman 2015) on CIFAR-10 (Krizhevsky and Hinton 2009), and a variant of VGG-16 on CIFAR-100. (A dataset-loading sketch follows the table.)
Dataset Splits | Yes | T is optimized with respect to validation negative log-likelihood. (A temperature-scaling sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running its experiments.
Software Dependencies | No | The paper mentions LeNet-5-Caffe but does not specify version numbers for Caffe or any other software dependencies used in the experiments.
Experiment Setup | Yes | We set the regularizer weight ω (Eq. 9) and margin b (Eq. 5) to 0.01 and 4, respectively, in all experiments. (A hyperparameter sketch follows the table.)
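
The datasets named above (MNIST, CIFAR-10, CIFAR-100) are all publicly available. Below is a minimal loading sketch assuming torchvision; this is an assumption of convenience, since the paper's LeNet-5 pipeline is Caffe-based and no code is released.

    # Minimal dataset-loading sketch; torchvision is assumed, not the paper's stack.
    from torchvision import datasets, transforms

    to_tensor = transforms.ToTensor()

    mnist = datasets.MNIST("data", train=True, download=True, transform=to_tensor)       # LeNet-5-Caffe
    cifar10 = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)   # VGG-16
    cifar100 = datasets.CIFAR100("data", train=True, download=True, transform=to_tensor) # VGG-16 variant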
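
The split evidence quoted for Dataset Splits comes from the model-calibration experiment, where a temperature T is tuned on held-out data. Below is a minimal sketch of that step, assuming standard temperature scaling (Guo et al. 2017) in PyTorch; fit_temperature and its arguments are illustrative names, not the authors' code.

    import torch
    import torch.nn.functional as F

    def fit_temperature(val_logits, val_labels, lr=0.01, steps=200):
        # Fit a scalar temperature T by minimizing negative log-likelihood
        # (cross-entropy) on held-out validation logits.
        log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
        opt = torch.optim.Adam([log_t], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            nll = F.cross_entropy(val_logits / log_t.exp(), val_labels)
            nll.backward()
            opt.step()
        return log_t.exp().item()  # fitted temperature T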
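
For the Experiment Setup row, the sketch below is a purely illustrative wiring of the two reported constants. The actual forms of Eq. 5 (the margin term) and Eq. 9 (the regularizer) are defined in the paper; the hinge form and loss composition here are assumptions, not the authors' method.

    import torch

    OMEGA = 0.01    # regularizer weight ω (Eq. 9), value as reported in the paper
    MARGIN_B = 4.0  # margin b (Eq. 5), value as reported in the paper

    def hinge_margin_term(score_pos, score_neg, b=MARGIN_B):
        # A generic hinge with margin b; the paper's Eq. 5 may take another form.
        return torch.clamp(b - (score_pos - score_neg), min=0).mean()

    def total_loss(task_loss, margin_term, regularizer):
        # Assumed composition: task loss + margin term + ω-weighted regularizer.
        return task_loss + margin_term + OMEGA * regularizer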