Hierarchical Implicit Models and Likelihood-Free Variational Inference
Authors: Dustin Tran, Rajesh Ranganath, David Blei
NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate diverse applications: a large-scale physical simulator for predator-prey populations in ecology; a Bayesian generative adversarial network for discrete data; and a deep implicit model for text generation. (A sketch of such a predator-prey simulator follows the table.) |
| Researcher Affiliation | Academia | Dustin Tran (Columbia University), Rajesh Ranganath (Princeton University), David M. Blei (Columbia University) |
| Pseudocode | Yes | Algorithm 1: Likelihood-free variational inference (LFVI). (A minimal sketch of the LFVI loop follows the table.) |
| Open Source Code | No | The paper states that the algorithm is available in Edward [53], a probabilistic programming language, but does not provide a dedicated open-source release of the methodology implemented in this paper. |
| Open Datasets | Yes | Table 1: Classification accuracy of Bayesian GAN and Bayesian neural networks across small to medium-size data sets: Crabs, Pima, Covertype, MNIST. |
| Dataset Splits | No | The paper mentions using datasets like MNIST and Lotka-Volterra simulations but does not specify the train/validation/test splits or a methodology for them. |
| Hardware Specification | No | The paper does not provide specific hardware details (like GPU/CPU models or types) used for running its experiments. |
| Software Dependencies | No | The paper mentions the use of Edward, a probabilistic programming language, but does not specify its version number or any other software dependencies with specific versions. |
| Experiment Setup | Yes | We initialize parameters from a standard normal and apply gradient descent with ADAM. g(· \| θ) is a 2-layer multilayer perceptron with ReLU activations and batch normalization, parameterized by weights and biases θ. We place normal priors, θ ~ N(0, 1). (A sketch of this setup follows the table.) |
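The predator-prey application treats the simulator itself as the model: sampling trajectories is cheap, but the likelihood of an observed trajectory is intractable. Below is a minimal NumPy sketch of a stochastic Lotka-Volterra simulator in that spirit; the Euler discretization, noise scale, initial populations, and parameter values are illustrative assumptions, not the paper's configuration.

```python
# A minimal sketch of a stochastic Lotka-Volterra simulator of the kind used
# as an implicit model: drawing samples is easy, but the likelihood of an
# observed trajectory is intractable. All constants here are assumptions.
import numpy as np

def lotka_volterra(theta, T=100, dt=0.1, seed=None):
    """Simulate prey/predator populations under theta = (a, b, c, d)."""
    rng = np.random.default_rng(seed)
    a, b, c, d = theta
    prey, pred = 50.0, 100.0
    traj = np.empty((T, 2))
    for t in range(T):
        # Euler step of the classic dynamics plus demographic noise.
        prey += dt * (a * prey - b * prey * pred) + rng.normal(0.0, 1.0)
        pred += dt * (d * prey * pred - c * pred) + rng.normal(0.0, 1.0)
        prey, pred = max(prey, 0.0), max(pred, 0.0)  # populations stay nonnegative
        traj[t] = prey, pred
    return traj

x_obs = lotka_volterra(theta=(0.5, 0.0025, 0.3, 0.0025), seed=0)
```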
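Algorithm 1 (LFVI) alternates two stochastic-gradient steps: one trains a ratio estimator to distinguish simulator outputs from data, and one updates the variational parameters using that ratio in place of the intractable log-likelihood. The PyTorch sketch below illustrates this loop on a toy implicit model with a single global parameter; the simulator, network sizes, and learning rates are assumptions, and this is not the authors' Edward implementation.

```python
# A toy LFVI loop: the ratio network r(x, beta) stands in for
# log p(x | beta) - log q(x) when forming the variational objective.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Observed" data from an implicit simulator with unknown location beta_true.
def simulate(beta, n):
    return beta + torch.randn(n, 1)  # likelihood treated as intractable

beta_true = torch.tensor(2.0)
x_data = simulate(beta_true, 200)

# Variational family q(beta) = Normal(mu, softplus(rho)^2), reparameterized.
mu = torch.zeros(1, requires_grad=True)
rho = torch.zeros(1, requires_grad=True)

# Ratio estimator: classifies (x, beta) pairs as simulated vs. observed.
ratio = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

opt_theta = torch.optim.Adam(ratio.parameters(), lr=1e-3)
opt_lambda = torch.optim.Adam([mu, rho], lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    sigma = nn.functional.softplus(rho)
    beta = mu + sigma * torch.randn(1)  # reparameterized draw from q

    # (1) Ratio-estimator step: simulated pairs labeled 1, data pairs labeled 0,
    # so the optimal logit is log p(x | beta) - log q(x).
    x_sim = simulate(beta.detach(), 200)
    pairs_sim = torch.cat([x_sim, beta.detach().expand(200, 1)], dim=1)
    pairs_dat = torch.cat([x_data, beta.detach().expand(200, 1)], dim=1)
    logits = ratio(torch.cat([pairs_sim, pairs_dat]))
    labels = torch.cat([torch.ones(200, 1), torch.zeros(200, 1)])
    loss_theta = bce(logits, labels)
    opt_theta.zero_grad(); loss_theta.backward(); opt_theta.step()

    # (2) Variational step: maximize the surrogate ELBO
    # sum_n r(x_n, beta) + log p(beta) - log q(beta).
    pairs = torch.cat([x_data, beta.expand(200, 1)], dim=1)
    log_ratio = ratio(pairs).sum()
    log_prior = -0.5 * beta.pow(2).sum()  # beta ~ N(0, 1), up to a constant
    log_q = (-0.5 * ((beta - mu) / sigma).pow(2) - sigma.log()).sum()
    loss_lambda = -(log_ratio + log_prior - log_q)
    opt_lambda.zero_grad(); loss_lambda.backward(); opt_lambda.step()

print(f"posterior mean ~ {mu.item():.2f} (true beta = {beta_true.item():.2f})")
```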
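For the experiment-setup row, the following sketch mirrors the stated configuration: a 2-layer ReLU MLP with batch normalization, parameters initialized from a standard normal, N(0, 1) priors on θ, and ADAM as the optimizer. The layer widths and noise dimension are assumptions, not values taken from the paper.

```python
# A minimal sketch of the generative network g(. | theta) described in the
# setup row. Sizes are assumptions; only the architecture family, the N(0, 1)
# prior, the standard-normal initialization, and ADAM come from the paper.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=10, hidden=64, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, eps):
        return self.net(eps)

g = Generator()
# Initialize parameters from a standard normal, matching the stated setup.
for p in g.parameters():
    nn.init.normal_(p, mean=0.0, std=1.0)

# log p(theta) under the N(0, 1) prior, up to a constant; this term would
# enter the variational objective alongside the ratio estimator.
log_prior = sum(-0.5 * p.pow(2).sum() for p in g.parameters())
opt = torch.optim.Adam(g.parameters(), lr=1e-3)
```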