Improved Variational Inference with Inverse Autoregressive Flow
Authors: Durk P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors. |
| Researcher Affiliation | Collaboration | Diederik P. Kingma (dpkingma@openai.com), Tim Salimans (tim@openai.com), Rafal Jozefowicz (rafal@openai.com), Xi Chen (peter@openai.com), Ilya Sutskever (ilya@openai.com), Max Welling (M.Welling@uva.nl); University of Amsterdam, University of California Irvine, and the Canadian Institute for Advanced Research (CIFAR). |
| Pseudocode | Yes | Algorithm 1: Pseudo-code of an approximate posterior with Inverse Autoregressive Flow (IAF). (A runnable sketch of one IAF step follows the table.) |
| Open Source Code | Yes | Code for reproducing key empirical results is available online. |
| Open Datasets | Yes | Table 1 shows results on MNIST for these types of posteriors. We also evaluated IAF on the CIFAR-10 dataset of natural images. |
| Dataset Splits | No | The paper mentions the 'dynamically sampled binarized MNIST version used in previous publications' and 'Hugo Larochelle's statically binarized MNIST', but does not provide specific split percentages or sample counts for the training, validation, and test sets. It relies on previously established datasets without detailing the splits within the paper. |
| Hardware Specification | Yes | Sampling took about 0.05 seconds/image with the ResNet VAE model, versus 52.0 seconds/image with the PixelCNN model, on an NVIDIA Titan X GPU. |
| Software Dependencies | No | The paper does not provide specific software details with version numbers (e.g., Python 3.x, PyTorch 1.x) that would be needed to replicate the experiment. |
| Experiment Setup | Yes | Please see appendix C for details on the architectures of the generative model and inference models. |
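As a companion to the Pseudocode row, here is a minimal NumPy sketch of one IAF step, following Algorithm 1 of the paper: draw z from the diagonal Gaussian posterior, then apply an autoregressive gated update and subtract the log-determinant from log q(z). The strictly lower-triangular weight matrices `W_m`, `W_s` and the dimensionality `D` are illustrative stand-ins of our own; the paper uses MADE/PixelCNN-style autoregressive networks and a context vector h, which are omitted here.

```python
# Sketch of one Inverse Autoregressive Flow (IAF) step (cf. Algorithm 1).
# The masked-linear autoregressive map below is a simplified stand-in for
# the paper's MADE-style network; shapes and seeds are illustrative only.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
D = 4  # latent dimensionality (illustrative)

# Encoder outputs (placeholders): mean and log-std of the diagonal Gaussian.
mu, log_sigma = rng.normal(size=D), rng.normal(size=D) * 0.1

# Initial sample and its log-density under the diagonal Gaussian posterior.
eps = rng.normal(size=D)
z = mu + np.exp(log_sigma) * eps
log_qz = -np.sum(log_sigma + 0.5 * eps**2 + 0.5 * np.log(2 * np.pi))

# Strictly lower-triangular weights make [m, s] autoregressive in z:
# output i depends only on z_j with j < i, so the Jacobian is triangular.
W_m = np.tril(rng.normal(size=(D, D)), k=-1)
W_s = np.tril(rng.normal(size=(D, D)), k=-1)

m, s = W_m @ z, W_s @ z + 1.0  # +1 biases the gates toward identity at init
gate = sigmoid(s)

# IAF update: z <- gate * z + (1 - gate) * m.
# The log-det-Jacobian is sum(log gate), subtracted from log q(z).
z = gate * z + (1.0 - gate) * m
log_qz -= np.sum(np.log(gate))

print(z, log_qz)
```

Because the update's Jacobian is triangular with diagonal entries `gate`, the density change is a cheap elementwise sum; stacking several such steps (reversing the variable ordering between steps) yields the richer posteriors the paper compares against the diagonal Gaussian baseline.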