MuProp: Unbiased Backpropagation For Stochastic Neural Networks

Authors: Shixiang Gu, Sergey Levine, Ilya Sutskever, Andriy Mnih

ICLR 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "5 EXPERIMENTS: We compare the LR, ST, and 1/2 estimators with the MuProp estimator on tasks that use a diverse set of network architectures."
Researcher Affiliation | Collaboration | Shixiang Gu (University of Cambridge; MPI for Intelligent Systems, Tübingen, Germany), Sergey Levine (Google Brain), Ilya Sutskever (Google Brain), and Andriy Mnih (Google DeepMind)
Pseudocode | Yes | "Algorithm 1: Compute MuProp Gradient Estimator"
Open Source Code | No | The paper does not provide any links to source code, nor does it state that the code is publicly available.
Open Datasets | Yes | "For MNIST, the output pixels are binarized using the same protocol as in (Raiko et al., 2015). Given an input x, an output y, and stochastic hidden variables h, the objective is to maximize $\mathbb{E}_{h^{(i)} \sim p_\theta(h|x)}\left[\log \frac{1}{m} \sum_{i=1}^{m} p_\theta(y \mid h^{(i)})\right]$, an importance-sampled estimate of the likelihood objective (Raiko et al., 2015; Burda et al., 2015). We applied the models to the binarized MNIST dataset, which consists of 28×28 images of hand-written digits, and is commonly used for evaluating generative models."
Dataset Splits | No | The paper mentions "m = 100 for validation and testing", but this refers to the number of Monte Carlo samples, not the size of a validation split. It also states "The learning rate was selected from {0.003, 0.001, .., 0.000003}, and the best test score is reported," implying hyperparameters were selected on held-out data, but without specifying the split used.
Hardware Specification | No | The paper does not provide details of the hardware (e.g., CPU, GPU models, memory) used for the experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers. It mentions an "automatic differentiation library" without naming it or giving a version.
Experiment Setup | Yes | "For MNIST, a fixed learning rate is chosen from {0.003, 0.001, .., 0.00003}, and the best test result is reported for each method. For the TFD dataset, the learning rate is chosen from the same list, but each learning rate is 10 times smaller. We used a momentum of 0.9 and minibatches of size 100."
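The MuProp estimator named in the Pseudocode row combines the score-function (REINFORCE) gradient with a baseline given by a first-order Taylor expansion of the objective around the mean-field value, plus a deterministic backprop term through that mean. The following is a minimal one-unit Bernoulli sketch of that idea, not the paper's Algorithm 1 verbatim; the function names and the scalar setting are illustrative:

```python
import numpy as np

def muprop_grad(f, df, theta, rng, n_samples=1):
    """MuProp-style estimate of d/dtheta E_{h~Bern(mu)}[f(h)] for a single
    stochastic unit with mu = sigmoid(theta).

    f  : objective as a function of the (relaxed or binary) unit value
    df : derivative f'(.) used for the first-order Taylor baseline
    """
    mu = 1.0 / (1.0 + np.exp(-theta))   # mean-field value of the unit
    dmu = mu * (1.0 - mu)               # d mu / d theta (sigmoid derivative)
    grads = []
    for _ in range(n_samples):
        h = float(rng.random() < mu)    # sample h ~ Bernoulli(mu)
        # score function for a Bernoulli logit: d log p(h|theta)/d theta = h - mu
        score = h - mu
        # learning signal centered by the Taylor expansion of f around mu
        residual = f(h) - (f(mu) + df(mu) * (h - mu))
        # score-function term + deterministic backprop through the mean
        grads.append(residual * score + df(mu) * dmu)
    return float(np.mean(grads))
```

For a linear f the residual vanishes for every sample, so the estimator reduces to exact backpropagation through the mean, which is the zero-variance limiting case the Taylor baseline is designed to approach.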
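The importance-sampled objective quoted in the Open Datasets row, log (1/m) Σᵢ p_θ(y|hⁱ), is typically evaluated from per-sample log-probabilities with the log-sum-exp trick for numerical stability. A minimal sketch (the function name and the use of NumPy are my own, not from the paper):

```python
import numpy as np

def importance_sampled_log_likelihood(log_p_y_given_h):
    """Estimate log (1/m) * sum_i p(y|h_i) from m samples h_1..h_m.

    log_p_y_given_h: array of shape (m,) holding log p(y | h_i).
    Uses log-sum-exp so small per-sample probabilities do not underflow.
    """
    m = log_p_y_given_h.shape[0]
    a = log_p_y_given_h.max()
    return a + np.log(np.exp(log_p_y_given_h - a).sum()) - np.log(m)
```

With m = 100 samples at validation/test time, as quoted in the Dataset Splits row, this gives the multi-sample likelihood estimate the paper reports.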
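The quoted Experiment Setup sweeps a fixed learning rate with momentum 0.9 and minibatches of 100. A classical momentum update consistent with those settings can be sketched as follows; only the momentum value comes from the quote, and the function name and update form are assumptions:

```python
def sgd_momentum_step(w, v, grad, lr, momentum=0.9):
    """One classical (heavy-ball) momentum update.

    w: parameter, v: velocity, grad: minibatch gradient, lr: fixed learning
    rate chosen from the swept grid. Returns the updated (w, v) pair.
    """
    v = momentum * v - lr * grad
    return w + v, v
```

Per the quote, the grid of learning rates is swept per method and the best result reported; the grid itself is abbreviated in the source, so it is not reproduced here.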