Posterior Matching for Arbitrary Conditioning

Authors: Ryan Strauss, Junier B. Oliva

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We conduct several experiments in which we apply Posterior Matching to various types of VAEs for a myriad of different tasks, including image inpainting, tabular arbitrary conditional density estimation, partially observed clustering, and active feature acquisition." "In order to empirically test Posterior Matching, we apply it to a variety of VAEs aimed at different tasks. We find that our models are able to match or surpass the performance of previous specialized VAE methods. All experiments were conducted using JAX [5] and the DeepMind JAX Ecosystem [1]." |
| Researcher Affiliation | Academia | "Ryan R. Strauss, Department of Computer Science, UNC at Chapel Hill, rrs@cs.unc.edu; Junier B. Oliva, Department of Computer Science, UNC at Chapel Hill, joliva@cs.unc.edu" |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | "Code is available at https://github.com/lupalab/posterior-matching." |
| Open Datasets | Yes | "We train a convolutional VAE with Posterior Matching on the MNIST dataset." "We train VQ-VAEs with Posterior Matching for the MNIST, OMNIGLOT, and CELEBA datasets." "We evaluate Posterior Matching on real-valued tabular data, specifically the benchmark UCI repository datasets from Papamakarios et al. [31]." |
| Dataset Splits | No | The paper mentions "training data" and a "test set" but does not explicitly provide percentages, sample counts, or a clear methodology for training/validation/test splits. |
| Hardware Specification | Yes | "We would like to thank Google's TPU Research Cloud program for providing free access to TPUs." |
| Software Dependencies | No | "All experiments were conducted using JAX [5] and the DeepMind JAX Ecosystem [1]." (An illustrative environment sketch follows the table.) |
| Experiment Setup | No | "More architecture and training details can be found in the Appendix." |
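The rows above state only that the experiments were implemented with JAX and the DeepMind JAX Ecosystem, and that, among other models, a convolutional VAE was trained on MNIST. As a rough, hedged illustration of that software stack (not the authors' code, which lives in the linked repository), the sketch below assumes Haiku and Optax as the ecosystem libraries and shows a generic convolutional VAE training step on MNIST-shaped data. It deliberately omits the Posterior Matching objective, and every name and hyperparameter in it (`LATENT_DIM`, layer widths, the Adam learning rate) is a placeholder assumption rather than a detail taken from the paper.

```python
# Minimal sketch, NOT the authors' implementation: it only illustrates the kind of
# stack implied by "JAX and the DeepMind JAX Ecosystem" (assumed here to mean Haiku
# and Optax) and a plain convolutional VAE on MNIST. All sizes are placeholders.
import jax
import jax.numpy as jnp
import haiku as hk
import optax

LATENT_DIM = 16  # placeholder, not taken from the paper


def encoder(x):
    """Convolutional encoder producing a diagonal-Gaussian posterior."""
    h = jax.nn.relu(hk.Conv2D(32, kernel_shape=3, stride=2)(x))
    h = jax.nn.relu(hk.Conv2D(64, kernel_shape=3, stride=2)(h))
    h = hk.Flatten()(h)
    return hk.Linear(LATENT_DIM)(h), hk.Linear(LATENT_DIM)(h)  # mean, log_var


def decoder(z):
    """Deconvolutional decoder producing Bernoulli logits over 28x28 pixels."""
    h = jax.nn.relu(hk.Linear(7 * 7 * 64)(z)).reshape((-1, 7, 7, 64))
    h = jax.nn.relu(hk.Conv2DTranspose(32, kernel_shape=3, stride=2)(h))
    return hk.Conv2DTranspose(1, kernel_shape=3, stride=2)(h)


def vae_loss(x, rng):
    """Negative ELBO for a standard VAE (no Posterior Matching terms)."""
    mean, log_var = encoder(x)
    eps = jax.random.normal(rng, mean.shape)
    z = mean + jnp.exp(0.5 * log_var) * eps  # reparameterization trick
    logits = decoder(z)
    recon = jnp.sum(optax.sigmoid_binary_cross_entropy(logits, x), axis=(1, 2, 3))
    kl = 0.5 * jnp.sum(jnp.exp(log_var) + mean**2 - 1.0 - log_var, axis=-1)
    return jnp.mean(recon + kl)


loss_fn = hk.transform(vae_loss)
optimizer = optax.adam(1e-3)  # placeholder learning rate


@jax.jit
def train_step(params, opt_state, x, rng):
    loss, grads = jax.value_and_grad(lambda p: loss_fn.apply(p, rng, x, rng))(params)
    updates, opt_state = optimizer.update(grads, opt_state)
    return optax.apply_updates(params, updates), opt_state, loss


# Example usage with a random batch shaped like binarized MNIST.
rng = jax.random.PRNGKey(0)
x = jax.random.bernoulli(rng, 0.5, (8, 28, 28, 1)).astype(jnp.float32)
params = loss_fn.init(rng, x, rng)
opt_state = optimizer.init(params)
params, opt_state, loss = train_step(params, opt_state, x, rng)
```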