Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives

Authors: George Tucker, Dieterich Lawson, Shixiang Gu, Chris J. Maddison

ICLR 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Then, we evaluate DReG estimators on MNIST generative modeling, Omniglot generative modeling, and MNIST structured prediction tasks. In all cases, we demonstrate substantial unbiased variance reduction, which translates to improved performance over the original estimators.
Researcher Affiliation | Collaboration | George Tucker (Google Brain, gjt@google.com); Dieterich Lawson (New York University, jdl404@nyu.edu); Shixiang Gu (Google Brain, shanegu@google.com); Chris J. Maddison (University of Oxford and DeepMind, cmaddis@stats.ox.ac.uk)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Implementation of DReG estimators and code to reproduce experiments: sites.google.com/view/dregs.
Open Datasets | Yes | Training generative models of the binarized MNIST digits dataset is a standard benchmark task for latent variable models. ... Next, we performed the analogous experiment with the dynamically binarized Omniglot dataset.
Dataset Splits | Yes | We used the standard split of MNIST into train, validation, and test sets.
Hardware Specification | No | The paper does not provide specific details on the hardware used for running experiments, such as GPU or CPU models.
Software Dependencies | No | The paper does not provide specific software dependency versions (e.g., programming language or library versions) used in the experiments.
Experiment Setup | Yes | The generative model used 50 Gaussian latent variables with an isotropic prior and passed z through two deterministic layers of 200 tanh units to parameterize factorized Bernoulli outputs. The inference network passed x through two deterministic layers of 200 tanh units to parameterize a factorized Gaussian distribution over z. ... All methods used K = 64. (A hedged code sketch of this setup appears below the table.)
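
The Experiment Setup row gives enough detail to sketch the MNIST model, and the DReG estimator the paper evaluates has a compact surrogate-loss form. The PyTorch snippet below is a minimal, hypothetical reconstruction rather than the authors' released code (that lives at sites.google.com/view/dregs): the class names, the two-loss training split, and the input handling are assumptions, while the constants (50 Gaussian latents with an isotropic prior, two 200-unit tanh layers per network, factorized Bernoulli outputs, K = 64) come directly from the row above, and the squared-weight surrogate follows the paper's doubly reparameterized gradient for the inference network.

```python
import math

import torch
import torch.nn as nn
from torch.distributions import Bernoulli, Normal

# Constants taken from the "Experiment Setup" row; everything else is assumed.
K = 64        # importance samples per data point ("All methods used K = 64")
Z_DIM = 50    # Gaussian latent variables with an isotropic prior
X_DIM = 784   # binarized MNIST pixels
H = 200       # width of the two deterministic tanh layers in each network


class InferenceNet(nn.Module):
    """q(z | x): x -> two 200-unit tanh layers -> factorized Gaussian over z."""

    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(X_DIM, H), nn.Tanh(),
            nn.Linear(H, H), nn.Tanh(),
        )
        self.mean = nn.Linear(H, Z_DIM)
        self.log_std = nn.Linear(H, Z_DIM)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.log_std(h)


class GenerativeNet(nn.Module):
    """p(x | z): z -> two 200-unit tanh layers -> factorized Bernoulli over pixels."""

    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(Z_DIM, H), nn.Tanh(),
            nn.Linear(H, H), nn.Tanh(),
            nn.Linear(H, X_DIM),
        )

    def forward(self, z):
        return Bernoulli(logits=self.body(z))


def iwae_dreg_losses(x, q_net, p_net):
    """Return (loss_theta, loss_phi) for a batch x of shape [batch, 784] in {0, 1}.

    loss_theta is the negative K-sample IWAE bound (for the generative model);
    loss_phi is a surrogate whose inference-network gradient is the DReG
    estimator: sum_i w_tilde_i^2 * (d log w_i / d z_i) * (d z_i / d phi).
    """
    mean, log_std = q_net(x)
    q = Normal(mean, log_std.exp())
    z = q.rsample((K,))                             # [K, batch, Z_DIM], reparameterized

    # Evaluate log q(z | x) with detached parameters, so the only path from the
    # inference network into log_w is through the reparameterized samples z.
    q_stopped = Normal(mean.detach(), log_std.detach().exp())
    prior = Normal(torch.zeros(Z_DIM), torch.ones(Z_DIM))

    log_w = (p_net(z).log_prob(x).sum(-1)           # log p(x | z)
             + prior.log_prob(z).sum(-1)            # + log p(z)
             - q_stopped.log_prob(z).sum(-1))       # - log q(z | x), params detached

    w_tilde = torch.softmax(log_w, dim=0).detach()  # self-normalized weights, no grad

    loss_theta = -(torch.logsumexp(log_w, dim=0) - math.log(K)).mean()
    loss_phi = -(w_tilde ** 2 * log_w).sum(0).mean()
    return loss_theta, loss_phi
```

One design note on this sketch: because the two losses target different parameter sets, a training step would backpropagate `loss_phi` only into `q_net` and `loss_theta` only into `p_net` (for example via `torch.autograd.grad(loss_phi, list(q_net.parameters()), retain_graph=True)`), rather than summing them into a single objective.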