Fast and Simple Natural-Gradient Variational Inference with Mixture of Exponential-family Approximations

Authors: Wu Lin, Mohammad Emtiyaz Khan, Mark Schmidt

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our empirical results demonstrate a faster convergence of our natural-gradient method compared to black-box gradient-based methods.
Researcher Affiliation | Academia | 1 University of British Columbia, Vancouver, Canada. 2 RIKEN Center for Advanced Intelligence Project, Tokyo, Japan.
Pseudocode | No | The paper provides equations for the updates but does not include structured pseudocode or an algorithm block.
Open Source Code | Yes | The code is available at: https://github.com/yorkerlin/VB-MixEF.
Open Datasets | Yes | The Breast Cancer dataset has N = 683, d = 10 with 341 chosen for training... The Sonar dataset has N = 208, d = 60 with 100 chosen for training... We use the UCI dataset covtype-binary-scale with d = 54, N = 581,012 with 464,809 chosen for training...
Dataset Splits | No | The paper reports the number of samples used for training (e.g., '341 chosen for training' out of 683 for Breast Cancer) but does not specify explicit training/validation/test split percentages or counts; it only implies that the remaining data is used for validation/testing.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) needed to replicate the experiments.
Experiment Setup | Yes | We initialize πc = 1/K and Σc = 100I. Each element of µc is randomly initialized by Gaussian noise with mean 0 and variance 100. We use 10 Monte Carlo samples to compute the gradients. ... We use 10 Monte Carlo samples for training, and M denotes the mini-batch size. ... We use 10 Monte Carlo (MC) samples and mini-batch size of 32. (A minimal initialization sketch based on these quoted values follows the table.)
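
The quoted setup fully specifies how the mixture approximation is initialized, so the following is a minimal sketch of that initialization and of drawing the 10 Monte Carlo samples, assuming NumPy. The function names (init_mixture, sample_mixture) and the choice of K = 3 are illustrative assumptions, not taken from the authors' repository.

```python
# Minimal sketch of the quoted initialization, assuming NumPy.
# Function names and K = 3 are illustrative, not from the authors' code.
import numpy as np


def init_mixture(K, d, seed=0):
    """Initialize a K-component Gaussian mixture approximation q(z)."""
    rng = np.random.default_rng(seed)
    pi = np.full(K, 1.0 / K)                       # mixing weights: pi_c = 1/K
    mu = rng.normal(0.0, np.sqrt(100.0), (K, d))   # each element of mu_c ~ N(0, 100)
    Sigma = np.tile(100.0 * np.eye(d), (K, 1, 1))  # Sigma_c = 100 I
    return pi, mu, Sigma


def sample_mixture(pi, mu, Sigma, n_samples=10, seed=1):
    """Draw Monte Carlo samples from the mixture (the paper uses 10 MC samples)."""
    rng = np.random.default_rng(seed)
    K, d = mu.shape
    comps = rng.choice(K, size=n_samples, p=pi)    # pick a component per sample
    return np.stack([rng.multivariate_normal(mu[c], Sigma[c]) for c in comps])


# Example: K = 3 components in d = 10 dimensions (d = 10 matches Breast Cancer above).
pi, mu, Sigma = init_mixture(K=3, d=10)
samples = sample_mixture(pi, mu, Sigma, n_samples=10)  # shape (10, 10)
```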