Variational Message Passing with Structured Inference Networks

Authors: Wu Lin, Nicolas Hubacher, Mohammad Emtiyaz Khan

ICLR 2018

Reproducibility checklist — each entry lists the variable, the result, and the supporting LLM response:
- Research Type: Experimental — "The main goal of our experiments is to show that our SAN algorithm gives similar results to the method of Johnson et al. (2016). For this reason, we apply our algorithm to the two examples considered in Johnson et al. (2016), namely the latent GMM and latent LDS (see Fig. 1). In this section we discuss results for latent GMM."
- Researcher Affiliation: Academia — "Wu Lin, Nicolas Hubacher, Mohammad Emtiyaz Khan. RIKEN Center for Advanced Intelligence Project, Tokyo, Japan. wlin2018@cs.ubc.ca, nicolas.hubacher@outlook.com, emtiyaz@gmail.com" Wu Lin is now at the University of British Columbia, Vancouver, Canada.
- Pseudocode: Yes — "Algorithm 1: Structured, Amortized, and Natural-gradient (SAN) Variational Inference"
- Open Source Code: Yes — "The code to reproduce our results is available at https://github.com/emtiyaz/vmp-for-svae/."
- Open Datasets: Yes — "We use two datasets. The first dataset is the synthetic two-dimensional Pinwheel dataset (N = 5000 and D = 2) used in (Johnson et al., 2016). The second dataset is the Auto dataset (N = 392 and D = 6, available in the UCI repository) which contains information about cars."
- Dataset Splits: Yes — "For both datasets we use 70% data for training and the rest for testing. For all methods, we tune the step-sizes, the number of mixture components, and the latent dimensionality on a validation set."
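The split described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the 70/30 train/test split is stated in the paper, while the choice to carve a validation set out of the training data (and its 10% size) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_dataset(X, train_frac=0.7, val_frac=0.1, rng=rng):
    """Shuffle X and return (train, val, test) arrays.

    train_frac follows the paper's 70/30 train/test split; val_frac
    (a fraction of the training data held out for tuning step sizes,
    mixture components, and latent dimensionality) is an assumption.
    """
    idx = rng.permutation(len(X))
    n_train = int(train_frac * len(X))
    n_val = int(val_frac * n_train)
    train_idx, test_idx = idx[:n_train], idx[n_train:]
    val_idx, train_idx = train_idx[:n_val], train_idx[n_val:]
    return X[train_idx], X[val_idx], X[test_idx]

# Stand-in array with the Pinwheel dataset's shape (N = 5000, D = 2).
X = rng.standard_normal((5000, 2))
X_train, X_val, X_test = split_dataset(X)
```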
- Hardware Specification: No — The paper mentions the "RAIDEN computing system at RIKEN AIP center, which we used for our experiments" but does not provide specific hardware details such as GPU or CPU models, or memory specifications.
- Software Dependencies: No — The paper does not specify version numbers for any software dependencies, libraries, or frameworks used in the implementation or experimentation.
- Experiment Setup: Yes — "DNNs in all models consist of two layers with 50 hidden units and an output layer of dimensionality 6 and 2 for the Auto and Pinwheel datasets, respectively." and "For all methods, we tune the step-sizes, the number of mixture components, and the latent dimensionality on a validation set."
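The quoted architecture — two hidden layers of 50 units and an output layer of dimensionality 2 (Pinwheel) or 6 (Auto) — can be sketched as a plain numpy forward pass. This is a hedged illustration only: the activation function (tanh), weight initialization scale, and function names here are assumptions not stated in the quoted excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dnn(d_in, d_out, hidden=50):
    """Build weight/bias pairs for a 2-hidden-layer MLP (50 units each)."""
    sizes = [d_in, hidden, hidden, d_out]
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward pass; tanh hidden activations are an assumption."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:  # linear output layer
            x = np.tanh(x)
    return x

# Pinwheel: D = 2 inputs, output dimensionality 2.
pinwheel_net = make_dnn(d_in=2, d_out=2)
# Auto: D = 6 inputs, output dimensionality 6.
auto_net = make_dnn(d_in=6, d_out=6)

z = forward(pinwheel_net, rng.standard_normal((5000, 2)))
```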