Importance Weighted Hierarchical Variational Inference

Authors: Artem Sobolev, Dmitry P. Vetrov

NeurIPS 2019

Reproducibility
Variable | Result | LLM Response
Research Type | Experimental | We empirically demonstrate superior performance of the proposed method in a set of experiments.
Researcher Affiliation | Collaboration | Artem Sobolev (Samsung AI Center, Moscow, Russia; asobolev@bayesgroup.ru); Dmitry Vetrov (Samsung AI Center, Moscow, Russia; Samsung-HSE Laboratory, National Research University Higher School of Economics (NRU HSE), Moscow, Russia)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/artsobolev/IWHVI
Open Datasets | Yes | We report results on two datasets: MNIST (LeCun et al., 1998) and OMNIGLOT (Lake et al., 2015).
Dataset Splits | Yes | For MNIST we follow the setup by Mescheder et al. (2017), and for OMNIGLOT we follow the standard setup (Burda et al., 2015).
Hardware Specification | No | The paper does not specify the hardware used to run the experiments (GPU/CPU models, memory amounts, or other machine details).
Software Dependencies | No | The paper mentions software such as TensorFlow, Matplotlib, NumPy, and SciPy (in references or in passing), but does not give version numbers for these or other key dependencies used in the experiments.
Experiment Setup | Yes | During training we used the proposed bound eq. (4) with a standard prior p(z) = N(z | 0, 1) and an increasing number of inner samples K: K = 0 for the first 250 epochs, K = 5 for the next 250 epochs, K = 25 for the next 500 epochs, and K = 50 from then on (90% of training)... Regarding the number of samples z, we used M = 1 throughout training... We train our models with Adam (Kingma and Ba, 2014) for 1000 epochs with a batch size of 100 and an initial learning rate of 1e-4.
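For readers reconstructing this setup, the sketch below encodes the quoted K schedule and optimizer settings as plain Python. It is an illustrative reading of the quoted text, not the authors' code (their implementation lives in the repository linked above); the function and constant names are hypothetical, and the training loop itself is elided.

```python
# Illustrative sketch of the experiment schedule quoted above.
# Names are hypothetical; only the numbers come from the paper.

def inner_samples(epoch):
    """Number of inner importance samples K used at a given training epoch.

    Quoted schedule: K = 0 for the first 250 epochs, K = 5 for the next
    250 epochs, K = 25 for the next 500 epochs, and K = 50 from then on.
    """
    if epoch < 250:
        return 0
    elif epoch < 500:
        return 5
    elif epoch < 1000:
        return 25
    return 50

# Other quoted settings:
NUM_OUTER_SAMPLES_M = 1       # samples of z per data point during training
BATCH_SIZE = 100
NUM_EPOCHS = 1000
INITIAL_LEARNING_RATE = 1e-4  # used with the Adam optimizer (Kingma and Ba, 2014)
```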