Faithful Inversion of Generative Models for Effective Amortized Inference

Authors: Stefan Webb, Adam Goliński, Robert Zinkov, N. Siddharth, Tom Rainforth, Yee Whye Teh, Frank Wood

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We prove the correctness of our approach and empirically show that the resulting minimally faithful inverses lead to better inference amortization than existing heuristic approaches." From Section 3 (Experiments): "We now consider the empirical impact of using NaMI compared with previous approaches."
Researcher Affiliation | Academia | Stefan Webb (University of Oxford), Adam Goliński (University of Oxford), Robert Zinkov (UBC), N. Siddharth (University of Oxford), Tom Rainforth (University of Oxford), Yee Whye Teh (University of Oxford), Frank Wood (UBC)
Pseudocode | Yes | "Algorithm 1: NaMI Graph Inversion" (a hedged sketch of the inversion procedure appears after this table)
Open Source Code | Yes | "Low-level details on the experimental setups can be found in Appendix D and an implementation at https://git.io/fxVQu."
Open Datasets | Yes | "We learn a relaxed Bernoulli VAE with 30 latent variables on MNIST" (a model sketch appears after this table)
Dataset Splits | Yes | "Figures 6a and 6b show an estimate of KL(pθ(z|x) || qψ(z|x)) using the train and test sets respectively."
Hardware Specification | No | The paper does not specify the GPU models, CPU models, or other hardware used for the experiments.
Software Dependencies | No | While the paper mentions "MADE" and neural networks, it does not specify software versions (e.g., PyTorch 1.x, TensorFlow 2.x, Python 3.x).
Experiment Setup | Yes | "compared after 1000 epochs of learning the: (a) negative ELBO, and (b) negative AIS estimates, varying inference network factorizations and capacities (total number of parameters)"; "Results are given in Figure 6 for depth d = 5 averaging over 10 runs. We hold the neural network capacities constant across methods and average over 10 runs." (a negative-ELBO sketch appears after this table)
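
On the pseudocode row: Algorithm 1 (NaMI Graph Inversion) builds a minimally faithful inverse structure by simulating variable elimination on the moralized generative model. The sketch below is one plausible reading of that idea, not a transcription of the paper's algorithm: the use of networkx, the topological elimination order over latents, and the toy node names are all assumptions.

```python
# Sketch of faithful graph inversion via simulated variable elimination.
# Assumptions: networkx for graph handling; latents eliminated in a
# topological order of the generative DAG. See Algorithm 1 in the paper
# for the authoritative procedure.
import itertools
import networkx as nx

def moralize(dag):
    """Moral graph: drop edge directions and marry every pair of co-parents."""
    moral = dag.to_undirected()
    for node in dag.nodes:
        moral.add_edges_from(itertools.combinations(dag.predecessors(node), 2))
    return moral

def invert(dag, observed):
    """Return a DAG over the latents giving a faithful inverse factorization.

    When a latent v is eliminated, its remaining neighbours become its
    parents in the inverse, and fill-in edges among those neighbours keep
    the structure faithful for the variables eliminated later.
    """
    moral = moralize(dag)
    inverse = nx.DiGraph()
    inverse.add_nodes_from(dag.nodes)  # observed nodes stay parentless
    latents = [n for n in nx.topological_sort(dag) if n not in observed]
    for v in latents:
        neighbours = list(moral.neighbors(v))
        inverse.add_edges_from((u, v) for u in neighbours)
        moral.add_edges_from(itertools.combinations(neighbours, 2))  # fill-in
        moral.remove_node(v)
    return inverse

# Toy example: latent chain z1 -> z2 with observations x1, x2.
g = nx.DiGraph([("z1", "z2"), ("z1", "x1"), ("z2", "x2")])
print(sorted(invert(g, observed={"x1", "x2"}).edges))
# [('x1', 'z1'), ('x1', 'z2'), ('x2', 'z2'), ('z2', 'z1')]
# i.e. q(z2 | x1, x2) q(z1 | z2, x1), a faithful inverse of the chain.
```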
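
On the open-datasets row: the quoted experiment trains a relaxed Bernoulli VAE with 30 latent variables on MNIST. The PyTorch sketch below shows the general shape of such a model; the layer widths, temperature, and fully factorized (mean-field) encoder are assumptions, since varying the inference network's factorization is precisely what the paper studies.

```python
# Hypothetical relaxed-Bernoulli VAE in the spirit of the quoted setup.
# Assumptions: 784-pixel MNIST inputs, 200-unit hidden layers, and a
# temperature of 0.5; none of these values come from the paper.
import torch
import torch.nn as nn
from torch.distributions import Bernoulli, RelaxedBernoulli

class RelaxedBernoulliVAE(nn.Module):
    def __init__(self, n_latent=30, temperature=0.5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 200), nn.ReLU(), nn.Linear(200, n_latent))
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 200), nn.ReLU(), nn.Linear(200, 784))
        self.temperature = torch.tensor(temperature)

    def forward(self, x):
        # q(z|x): relaxed Bernoulli, so z = q.rsample() is reparameterized
        q = RelaxedBernoulli(self.temperature, logits=self.encoder(x))
        z = q.rsample()
        # p(x|z): factorized Bernoulli likelihood over the 784 pixels
        return q, z, Bernoulli(logits=self.decoder(z))
```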
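
On the experiment-setup row: the factorizations are compared by negative ELBO (and negative AIS estimates, not sketched here). A minimal Monte Carlo estimator of the negative ELBO follows; `log_joint` and `make_q` are hypothetical callables standing in for log p(x, z) and the inference network q(z|x).

```python
# Minimal sketch of the negative-ELBO objective,
#   -ELBO(x) = -E_{q(z|x)}[log p(x, z) - log q(z|x)],
# estimated with reparameterized samples. `log_joint(x, z)` and
# `make_q(x)` are hypothetical stand-ins for the model and inference net.
import torch

def negative_elbo(log_joint, make_q, x, num_samples=1):
    q = make_q(x)  # a torch.distributions object supporting rsample()
    total = x.new_zeros(x.shape[0])
    for _ in range(num_samples):
        z = q.rsample()
        # sum per-dimension log q(z|x) if the distribution is factorized
        total = total + log_joint(x, z) - q.log_prob(z).sum(-1)
    return -(total / num_samples).mean()
```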