Rényi Divergence Variational Inference

Authors: Yingzhen Li, Richard E. Turner

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on Bayesian neural networks and variational auto-encoders demonstrate the wide applicability of the VR bound."
Researcher Affiliation | Academia | Yingzhen Li, University of Cambridge, Cambridge, CB2 1PZ, UK, yl494@cam.ac.uk; Richard E. Turner, University of Cambridge, Cambridge, CB2 1PZ, UK, ret26@cam.ac.uk
Pseudocode | Yes | "Algorithm 1: One gradient step for VR-α/VR-max with single backward pass. Here ŵ(ϵ_k; x) is shorthand for ŵ_{0,k}(ϵ_k; φ, x) in the main text."
Open Source Code | Yes | "The implementation of all the experiments in Python is released at https://github.com/YingzhenLi/VRbound."
Open Datasets | Yes | "The datasets are collected from the UCI dataset repository" (footnote 1: http://archive.ics.uci.edu/ml/datasets.html) and "Four datasets are considered: Frey Face (with 10-fold cross validation), Caltech 101 Silhouettes, MNIST and OMNIGLOT."
Dataset Splits | Yes | "We summarise the test negative log-likelihood (LL) and RMSE with standard error (across different random splits except for Year) for selected datasets in Figure 4, where the full results are provided in the appendix." and "Four datasets are considered: Frey Face (with 10-fold cross validation), Caltech 101 Silhouettes, MNIST and OMNIGLOT."
Hardware Specification | Yes | "Indeed our numpy implementation of VR-max achieves up to 3 times speed-up compared to IWAE (9.7s vs. 29.0s per epoch, tested on Frey Face data with K = 50 and batch size M = 100, CPU info: Intel Core i7-4930K CPU @ 3.40GHz)."
Software Dependencies | No | The paper mentions Python and the ADAM optimizer but specifies neither their version numbers nor any other software dependencies with versions.
Experiment Setup | No | The paper states that "the detailed experimental set-up (batch size, learning rate, etc.) can be found in the appendix", implying these details are absent from the main body of the paper.
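For context on the "Pseudocode" row: Algorithm 1 optimises a Monte Carlo estimate of the VR (Rényi) bound, which interpolates between the standard ELBO (α → 1) and the IWAE objective (α = 0), with VR-max as the α → −∞ limit. Below is a minimal NumPy sketch of these estimators computed from K log importance weights; the function names and toy weights are my own illustration, not the paper's released code.

```python
import numpy as np

def vr_bound(log_w, alpha):
    """Monte Carlo estimate of the VR bound from K log importance weights
    log_w[k] = log p(x, h_k) - log q(h_k | x).
    alpha = 0 recovers the IWAE objective; alpha -> 1 recovers the ELBO."""
    if np.isclose(alpha, 1.0):
        return np.mean(log_w)  # ELBO as the alpha -> 1 limit
    scaled = (1.0 - alpha) * log_w
    m = np.max(scaled)  # log-sum-exp shift for numerical stability
    return (m + np.log(np.mean(np.exp(scaled - m)))) / (1.0 - alpha)

def vr_max(log_w):
    """VR-max (alpha -> -infinity): keep only the largest log weight."""
    return np.max(log_w)

log_w = np.array([0.0, np.log(2.0)])  # toy log weights for illustration
elbo = vr_bound(log_w, 1.0)   # mean of log weights
iwae = vr_bound(log_w, 0.0)   # log of mean weight
vmax = vr_max(log_w)          # log of max weight
```

Since the bound is non-increasing in α, the three estimates satisfy `elbo <= iwae <= vmax` on any set of weights.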
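The speed-up quoted in the "Hardware Specification" row comes from the "single backward pass" in Algorithm 1: the IWAE gradient mixes all K samples with self-normalised importance weights, whereas VR-max backpropagates only through the sample with the largest weight. A hedged sketch of the two per-sample gradient weightings (my own illustration of this design choice, not the paper's code):

```python
import numpy as np

def iwae_grad_weights(log_w):
    """IWAE: self-normalised weights over all K samples, so every
    sample contributes to the backward pass."""
    w = np.exp(log_w - np.max(log_w))  # shift for numerical stability
    return w / w.sum()

def vr_max_grad_weights(log_w):
    """VR-max: all gradient weight on the single largest-weight sample,
    which is why one backward pass suffices."""
    weights = np.zeros_like(log_w)
    weights[np.argmax(log_w)] = 1.0
    return weights
```

With only one nonzero weight, a VR-max implementation can skip K − 1 backward passes per gradient step, consistent with the reported 9.7s vs. 29.0s per epoch at K = 50.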