Projected Latent Markov Chain Monte Carlo: Conditional Sampling of Normalizing Flows
Authors: Chris Cannella, Mohammadreza Soltani, Vahid Tarokh
ICLR 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through experimental tests applying normalizing flows to missing data tasks for a variety of data sets, we demonstrate the efficacy of PL-MCMC for conditional sampling from normalizing flows. |
| Researcher Affiliation | Academia | Chris Cannella, Mohammadreza Soltani & Vahid Tarokh, Department of Electrical and Computer Engineering, Duke University |
| Pseudocode | Yes | Algorithm 1: PL-MCMC Metropolis-Hastings Update; Algorithm 2: Monte Carlo Expectation Maximization Training of Normalizing Flow (see the illustrative sketch following this table) |
| Open Source Code | No | The paper mentions utilizing publicly available pre-trained models and modifying existing implementations, but it does not state that the authors themselves are releasing the code for the PL-MCMC method described in the paper. |
| Open Datasets | Yes | CIFAR-10 (Krizhevsky et al., 2009); CelebA (Liu et al., 2015); MNIST (LeCun et al., 1998); UCI datasets (Bache & Lichman, 2013) |
| Dataset Splits | Yes | The standard training and test set split for the CIFAR-10 dataset is 50,000 and 10,000 images, respectively. The standard training and test set split for the MNIST dataset is 60,000 and 10,000 images, respectively. |
| Hardware Specification | No | The paper does not specify the exact hardware (e.g., GPU models, CPU types, memory) used for running the experiments. It only details training parameters and software components. |
| Software Dependencies | No | The paper mentions optimizers such as Adamax and RMSprop and names the implementations used (e.g., "Our implementation is a modification of that by Mu (2019)" for NICE, and the implementations by Li (2019) for MisGAN and Mattei (2019) for MIWAE), but it does not provide specific version numbers for programming languages (e.g., Python), deep learning frameworks (e.g., PyTorch, TensorFlow), or other key software libraries. |
| Experiment Setup | Yes | The model was reportedly trained for a total of 1,500 epochs using Adamax with a learning rate of 5 × 10⁻⁴ and a batch size of 64. The normalizing flow is trained for 1,000 epochs over the standard 60,000-element MNIST training set using RMSprop with a learning rate of 1 × 10⁻⁵, a momentum of 0.9, and a batch size of 200 (see the configuration sketch following this table). |
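
The Pseudocode row above refers to Algorithm 1, a PL-MCMC Metropolis-Hastings update for conditional sampling from a normalizing flow. The following is a minimal, illustrative sketch of such an update, not the paper's Algorithm 1 verbatim: it assumes a toy invertible affine map in place of a trained flow, a Gaussian random-walk proposal in latent space, and an isotropic Gaussian auxiliary term on the observed coordinates; all names and hyperparameter values here are illustrative.

```python
# Sketch of a PL-MCMC-style Metropolis-Hastings update (cf. Algorithm 1).
# Assumptions (not from the paper): a toy affine "flow" z = A @ (x - b) with a
# standard-normal base density, a Gaussian random-walk proposal, and a Gaussian
# auxiliary penalty pulling decoded observed coordinates toward the true values.
import numpy as np

rng = np.random.default_rng(0)
dim = 4
A = np.linalg.qr(rng.normal(size=(dim, dim)))[0] * 1.5   # invertible toy "flow"
b = rng.normal(size=dim)
A_inv = np.linalg.inv(A)
log_det_A = np.linalg.slogdet(A)[1]                       # log|det dz/dx|

def flow_forward(x):          # x -> z
    return A @ (x - b)

def flow_inverse(z):          # z -> x
    return A_inv @ z + b

def log_model_density(x):
    # log p_X(x) up to an additive constant: base density of z plus log-Jacobian.
    z = flow_forward(x)
    return -0.5 * np.sum(z ** 2) + log_det_A

def pl_mcmc_step(z, x_obs, obs_idx, sigma_prop=0.3, sigma_aux=0.05):
    """One Metropolis-Hastings update in latent space.

    Acceptance weighs the flow's density at the decoded point together with a
    Gaussian auxiliary term on the observed coordinates. The affine flow has a
    constant Jacobian, so it cancels in the acceptance ratio.
    """
    z_prop = z + sigma_prop * rng.normal(size=z.shape)
    x_cur, x_prop = flow_inverse(z), flow_inverse(z_prop)

    def log_target(x):
        aux = -0.5 * np.sum((x[obs_idx] - x_obs) ** 2) / sigma_aux ** 2
        return log_model_density(x) + aux

    if np.log(rng.uniform()) < log_target(x_prop) - log_target(x_cur):
        return z_prop
    return z

# Usage: condition on the first two coordinates and sample the remaining ones.
obs_idx = np.array([0, 1])
x_obs = np.array([0.5, -1.0])
z = flow_forward(rng.normal(size=dim))
for _ in range(2000):
    z = pl_mcmc_step(z, x_obs, obs_idx)
x_sample = flow_inverse(z)
x_sample[obs_idx] = x_obs      # project the true observed coordinates back in
print(x_sample)
```

In this sketch the chain runs entirely in latent space, which mirrors the motivation for PL-MCMC: proposals decoded through the flow stay on the model's learned manifold, while the auxiliary term keeps the decoded observed coordinates consistent with the conditioning data.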
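
The Experiment Setup row reports two training configurations. The snippet below restates them as a PyTorch-style setup; only the quoted hyperparameter values come from the paper, while the placeholder models and the PyTorch framing itself are assumptions.

```python
# Only the hyperparameter values quoted in the Experiment Setup row come from
# the paper; the placeholder Linear modules and PyTorch framing are assumptions.
import torch
import torch.nn as nn

flow_a = nn.Linear(784, 784)   # stand-in for the first reported flow model
flow_b = nn.Linear(784, 784)   # stand-in for the MNIST normalizing flow

# First reported setup: 1,500 epochs, Adamax, lr = 5e-4, batch size 64.
opt_a = torch.optim.Adamax(flow_a.parameters(), lr=5e-4)

# Second reported setup (MNIST flow): 1,000 epochs, RMSprop,
# lr = 1e-5, momentum = 0.9, batch size 200.
opt_b = torch.optim.RMSprop(flow_b.parameters(), lr=1e-5, momentum=0.9)
```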