Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives
Authors: George Tucker, Dieterich Lawson, Shixiang Gu, Chris J. Maddison
ICLR 2019 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Then, we evaluate DReG estimators on MNIST generative modeling, Omniglot generative modeling, and MNIST structured prediction tasks. In all cases, we demonstrate substantial unbiased variance reduction, which translates to improved performance over the original estimators. |
| Researcher Affiliation | Collaboration | George Tucker (Google Brain), Dieterich Lawson (New York University), Shixiang Gu (Google Brain), Chris J. Maddison (University of Oxford, DeepMind) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Implementation of DReG estimators and code to reproduce experiments: sites.google.com/view/dregs. |
| Open Datasets | Yes | Training generative models of the binarized MNIST digits dataset is a standard benchmark task for latent variable models. ... Next, we performed the analogous experiment with the dynamically binarized Omniglot dataset |
| Dataset Splits | Yes | We used the standard split of MNIST into train, validation, and test sets. |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for running experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper does not provide specific software dependency versions (e.g., programming language or library versions) used in the experiments. |
| Experiment Setup | Yes | The generative model used 50 Gaussian latent variables with an isotropic prior and passed z through two deterministic layers of 200 tanh units to parameterize factorized Bernoulli outputs. The inference network passed x through two deterministic layers of 200 tanh units to parameterize a factorized Gaussian distribution over z. ... All methods used K = 64. |
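
For reference, the quoted setup matches a small IWAE-style variational autoencoder. Below is a minimal PyTorch sketch of that architecture and its K = 64 importance-weighted bound. The latent size, hidden widths, tanh activations, output likelihoods, and K come from the quoted text; the 784-dimensional input (flattened binarized MNIST), the choice of PyTorch, and all remaining details are assumptions. The paper's DReG estimators change how gradients are taken through this bound; plain backpropagation through this sketch gives the standard (non-DReG) IWAE gradient.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 50  # 50 Gaussian latent variables with an isotropic prior
HIDDEN = 200     # two deterministic layers of 200 tanh units
K = 64           # importance samples; "All methods used K = 64"

class IWAE(nn.Module):
    def __init__(self, x_dim=784):  # 784 assumes flattened binarized MNIST
        super().__init__()
        # Inference network: x -> factorized Gaussian over z
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, HIDDEN), nn.Tanh(),
            nn.Linear(HIDDEN, HIDDEN), nn.Tanh(),
            nn.Linear(HIDDEN, 2 * LATENT_DIM),  # mean and log-variance
        )
        # Generative model: z -> factorized Bernoulli over x
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, HIDDEN), nn.Tanh(),
            nn.Linear(HIDDEN, HIDDEN), nn.Tanh(),
            nn.Linear(HIDDEN, x_dim),  # Bernoulli logits
        )

    def iwae_bound(self, x):
        """Importance-weighted bound log (1/K) sum_k w_k, averaged over the batch."""
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        std = torch.exp(0.5 * logvar)
        eps = torch.randn(K, *mu.shape, device=mu.device)  # K reparameterized samples
        z = mu + std * eps                                 # (K, batch, LATENT_DIM)
        q = torch.distributions.Normal(mu, std)
        prior = torch.distributions.Normal(0.0, 1.0)
        log_px_z = -F.binary_cross_entropy_with_logits(
            self.decoder(z), x.expand(K, *x.shape), reduction="none").sum(-1)
        log_w = log_px_z + prior.log_prob(z).sum(-1) - q.log_prob(z).sum(-1)
        return (torch.logsumexp(log_w, dim=0) - math.log(K)).mean()
```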