Variance Reduction in Black-box Variational Inference by Adaptive Importance Sampling

Authors: Ximing Li, Changchun Li, Jinjin Chi, Jihong Ouyang

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on two Bayesian models show that the MMP can effectively reduce variance in black-box learning and perform better than baseline inference algorithms." From Section 5 (Empirical Study): "We have described an adaptive proposal, namely MMP, for the importance sampling step in O-BBVI. In this section, we evaluate the performance of O-BBVI with MMPs (abbr. O-BBVI-MMP) on two Bayesian models, including Mixture of Gaussians and Bayesian logistic regression [Jaakkola and Jordan, 1997]. We choose three black-box inference algorithms as baselines, including BBVI [Ranganath et al., 2014], O-BBVI [Ruiz et al., 2016b] and AEVB [Kingma and Welling, 2014]."
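The passage above describes replacing the sampling distribution in the score-function gradient estimator of BBVI with an adaptive importance-sampling proposal. A minimal numerical sketch of an importance-weighted score-function (REINFORCE-style) ELBO gradient is below; the function names and the toy Gaussian setup are illustrative assumptions, not the authors' MMP construction.

```python
import numpy as np

def iw_score_gradient(log_p, log_q, grad_log_q, sample_proposal, log_r, S=16):
    """Importance-weighted score-function estimate of the ELBO gradient.

    Samples are drawn from a proposal r (in the paper, the adaptive MMP)
    rather than from the variational distribution q itself, and each score
    term is reweighted by w = q(z)/r(z). All arguments are user-supplied
    callables; this is an illustrative sketch, not the paper's algorithm.
    """
    zs = sample_proposal(S)                 # z_s ~ r(z), the proposal
    w = np.exp(log_q(zs) - log_r(zs))       # importance weights q(z)/r(z)
    f = log_p(zs) - log_q(zs)               # instantaneous ELBO term
    return np.mean(w * grad_log_q(zs) * f)  # score-function estimator
```

As a sanity check, with q = N(mu, 1), p = N(0, 1), and an overdispersed proposal r = N(0, 2), the estimate should approach the true ELBO gradient with respect to mu, which is -mu.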
Researcher Affiliation | Academia | Ximing Li, Changchun Li, Jinjin Chi, Jihong Ouyang. College of Computer Science and Technology, Jilin University, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, China. liximing86@gmail.com
Pseudocode | Yes | The paper presents Algorithm 1, "O-BBVI with MMPs".
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or links to code repositories for the described methodology.
Open Datasets | Yes | "We use a subset of the MNIST data set that includes all 14,283 examples from the digit classes 2 and 7, each with 784 pixels. We then evaluate our method across two real-world data sets, including a UCI data set Vote (https://archive.ics.uci.edu/ml/datasets.html) and an object data set COIL20."
Dataset Splits | No | The paper gives a training/testing split for the MNIST data ("The standard training set contains 12,223 examples and the remaining 2,060 examples are used for testing.") but does not specify a separate validation split.
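The split sizes quoted above are internally consistent with the 14,283-example subset; the sketch below only checks that arithmetic with a hypothetical contiguous index split, since the authors' actual example assignment is not given.

```python
# Sizes reported in the paper for the MNIST 2-vs-7 subset.
n_total = 14283  # all examples from digit classes 2 and 7
n_train = 12223  # "standard training set"

# Hypothetical contiguous split; the paper does not specify the assignment.
indices = list(range(n_total))
train_idx, test_idx = indices[:n_train], indices[n_train:]
print(len(train_idx), len(test_idx))  # 12223 2060, matching the paper
```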
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., CPU/GPU models, memory specifications).
Software Dependencies | No | The paper does not list specific software dependencies with version numbers needed to replicate the experiments.
Experiment Setup | Yes | "The number of samples S is set to 32 and 16 for the baseline algorithms and our algorithm, respectively. Specifically, the sample number M used in MMP estimation is set to 8. For our O-BBVI-MMP, the iteration window size P is set to 8."
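For reference, the quoted hyperparameters can be gathered into a single configuration; the key names below are illustrative assumptions, not identifiers from any released code.

```python
# Hyperparameters quoted in the experiment setup; key names are hypothetical.
config = {
    "S_baselines": 32,  # samples per iteration for BBVI, O-BBVI, and AEVB
    "S_ours": 16,       # samples per iteration for O-BBVI-MMP
    "M": 8,             # sample number used in MMP estimation
    "P": 8,             # iteration window size for O-BBVI-MMP
}
```

Note that O-BBVI-MMP is run with half as many samples per iteration as the baselines, so the reported gains are not attributable to a larger sampling budget.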