Slice Sampling Reparameterization Gradients

Authors: David Zoltowski, Diana Cai, Ryan P. Adams

NeurIPS 2021

Reproducibility assessment: each variable is listed below with its result and the supporting LLM response.
Research Type: Experimental. "We evaluate the method on synthetic examples and apply it to a variety of applications with reparameterization of unnormalized probability distributions. Here we apply slice sampling reparameterization gradients to several problems, demonstrating the generality of the approach and the potential appeal of reparameterization gradients for unnormalized distributions."
Researcher Affiliation: Academia. David M. Zoltowski, Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540 (zoltowski@princeton.edu); Diana Cai, Department of Computer Science, Princeton University, Princeton, NJ 08540 (dcai@cs.princeton.edu); Ryan P. Adams, Department of Computer Science, Princeton University, Princeton, NJ 08540 (rpa@princeton.edu).
Pseudocode: No. The paper does not contain any clearly labeled pseudocode or algorithm blocks; the method is described in prose and mathematical equations. (A hypothetical sketch of the core idea appears after this list.)
Open Source Code: Yes. "Our implementation is available at https://github.com/PrincetonLIPS/slicereparam"
Open Datasets: Yes. "We fit a variational autoencoder (VAE) [31, 45] using the VCD objective to MNIST [34] and Fashion MNIST [58]." References [34] and [58] point to the MNIST handwritten digit database and to "Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms", respectively; both are publicly available datasets. (A dataset-loading sketch appears after this list.)
Dataset Splits: No. The paper states that the model was "optimized for 400K iterations using ADAM with batch sizes of 100" and refers to a "held-out log-likelihood", but it does not give train/validation/test split percentages or sample counts, explicitly mention a validation set, or cite standard predefined splits for these datasets.
Hardware Specification: No. The paper does not provide any details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies: No. The paper states, "We implemented the forward sampling and reverse mode AD in JAX [4]." While JAX is named, no version is given for it or for any other dependency, which reproducing the results would require. (A version-logging snippet appears after this list.)
Experiment Setup: Yes. "The model was optimized for 400K iterations using ADAM with batch sizes of 100." (A sketch of this configuration appears after this list.)
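
As noted under Pseudocode, the paper describes its method only in prose and equations. Below is a hypothetical JAX sketch of the core idea for a 1D Gaussian, the one case where the slice endpoints are available in closed form, so reverse-mode AD can propagate gradients from the new sample back to the distribution parameters. The authors' implementation handles general unnormalized densities; none of the names below are taken from their repository.

    import jax
    import jax.numpy as jnp

    def slice_step(params, x0, key):
        # One exact slice sampling step for the unnormalized Gaussian density
        # f(x) = exp(-(x - mu)^2 / (2 * sigma^2)).
        mu, log_sigma = params
        sigma = jnp.exp(log_sigma)
        key_h, key_x = jax.random.split(key)
        u1 = jax.random.uniform(key_h)  # height under f at the current point x0
        # The slice {x : f(x) > u1 * f(x0)} is the interval (mu - w, mu + w).
        w = jnp.sqrt((x0 - mu) ** 2 - 2.0 * sigma ** 2 * jnp.log(u1))
        u2 = jax.random.uniform(key_x)  # uniform position on the slice
        return (mu - w) + 2.0 * w * u2  # differentiable in (mu, log_sigma)

    params = jnp.array([0.5, 0.0])  # (mu, log_sigma)
    key = jax.random.PRNGKey(0)

    # Reparameterization-style gradient of the sampled point w.r.t. the
    # parameters, holding the underlying uniforms fixed via the PRNG key.
    x1, grads = jax.value_and_grad(lambda p: slice_step(p, 0.1, key))(params)
    print(x1, grads)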
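
For the Open Datasets row, a minimal loading sketch using tensorflow_datasets, which ships both corpora with their standard predefined train/test splits. The loader is an assumption chosen for illustration; the excerpt does not say how the authors obtained the data.

    # Illustrative only: the paper does not state how the data was loaded.
    import tensorflow_datasets as tfds

    mnist_train = tfds.load("mnist", split="train", as_supervised=True)
    mnist_test = tfds.load("mnist", split="test", as_supervised=True)
    fashion_train = tfds.load("fashion_mnist", split="train", as_supervised=True)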
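
For the Software Dependencies row, a generic snippet for recording the versions of the JAX stack actually used in a reproduction attempt. This is ordinary reproducibility hygiene, not something the paper prescribes.

    # Log the versions in use, since the paper names JAX but pins no version.
    import jax
    import jaxlib

    print("jax:", jax.__version__)
    print("jaxlib:", jaxlib.__version__)
    print("backend:", jax.default_backend())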
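
For the Experiment Setup row, a minimal optax sketch of the reported configuration (Adam, 400K iterations, batch size 100). The learning rate, model, and loss are placeholders, since none of them appear in the excerpt.

    import jax
    import jax.numpy as jnp
    import optax

    NUM_ITERS = 400_000   # reported in the paper
    BATCH_SIZE = 100      # reported in the paper
    LEARNING_RATE = 1e-3  # assumption: not stated in the excerpt

    def loss_fn(params, batch):
        # Placeholder objective; the paper optimizes the VCD objective for a VAE.
        return jnp.mean((batch @ params) ** 2)

    optimizer = optax.adam(LEARNING_RATE)

    @jax.jit
    def train_step(params, opt_state, batch):
        loss, grads = jax.value_and_grad(loss_fn)(params, batch)
        updates, opt_state = optimizer.update(grads, opt_state, params)
        params = optax.apply_updates(params, updates)
        return params, opt_state, loss

    params = jnp.zeros(784)
    opt_state = optimizer.init(params)
    key = jax.random.PRNGKey(0)
    for i in range(NUM_ITERS):
        key, subkey = jax.random.split(key)
        batch = jax.random.normal(subkey, (BATCH_SIZE, 784))  # stand-in data
        params, opt_state, loss = train_step(params, opt_state, batch)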