Variational Inference with Gaussian Score Matching
Authors: Chirag Modi, Charles Margossian, Yuling Yao, Robert Gower, David Blei, Lawrence Saul
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate how well GSM-VI can approximate a variety of target posterior distributions. We empirically compared GSM-VI to reparameterization BBVI on several classes of models, with both synthetic and real-world data. |
| Researcher Affiliation | Collaboration | Chirag Modi (Center for Computational Astrophysics and Center for Computational Mathematics, Flatiron Institute, New York; cmodi@flatironinstitute.org); Charles C. Margossian (Center for Computational Mathematics, Flatiron Institute, New York; cmargossian@flatironinstitute.org); Yuling Yao (Center for Computational Mathematics, Flatiron Institute, New York; yyao@flatironinstitute.org); Robert M. Gower (Center for Computational Mathematics, Flatiron Institute, New York; rgower@flatironinstitute.org); David M. Blei (Department of Computer Science and Department of Statistics, Columbia University, New York; david.blei@columbia.edu); Lawrence K. Saul (Center for Computational Mathematics, Flatiron Institute, New York; lsaul@flatironinstitute.org) |
| Pseudocode | Yes | Algorithm 1: Gaussian Score Matching VI; Algorithm 2: Black-box variational inference |
| Open Source Code | Yes | We provide a Python implementation of GSM-VI algorithm at https://github.com/modichirag/GSM-VI. |
| Open Datasets | Yes | A collection of real-world Bayesian inference problems from the posteriordb database of datasets and models. M. Magnusson, P. Bürkner, and A. Vehtari. posteriordb: a set of posteriors for Bayesian inference and probabilistic programming, November 2022. URL https://github.com/stan-dev/posteriordb. |
| Dataset Splits | No | The paper uses synthetic models where the true distribution is known, and for real-world models, it refers to the posteriordb. However, it does not provide explicit details on training, validation, and test dataset splits with percentages or counts for reproducibility. |
| Hardware Specification | No | The paper mentions running times but does not specify any hardware details like GPU/CPU models or specific machine configurations used for experiments. |
| Software Dependencies | No | The paper mentions a Python implementation, JAX, the ADAM optimizer [19], and bridgestan [6, 30], but it does not provide version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | The only free parameter in GSM-VI is the batch size B. ... In all studies of this section we report results for B = 2 and show that it is a good default baseline. We use the same batch size for BBVI. ... Unless specified otherwise, we initialize the variational approximation as a Gaussian distribution with zero mean and identity covariance matrix. ... We use the ADAM optimizer [19] with default settings but vary the learning rate between 10⁻¹ and 10⁻³. |
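The BBVI baseline in the Experiment Setup row can be made concrete with a small sketch. This is not the paper's GSM-VI closed-form update; it is a generic reparameterization-gradient BBVI loop in pure Python, using the settings quoted above (batch size B = 2, ADAM with default betas, a learning rate in the reported 10⁻¹ to 10⁻³ range, and a standard-Gaussian initialization). The 1-D Gaussian target `N(3, 2²)` and every function name here are illustrative assumptions, not taken from the paper.

```python
import math
import random

random.seed(0)

# Hypothetical 1-D target (not from the paper): N(3, 2^2).
# grad_logp is its score function, d/dz log p(z) = -(z - 3) / 4.
def grad_logp(z):
    return -(z - 3.0) / 4.0

# Variational parameters of q(z) = N(mu, exp(s)^2), initialized as in the
# paper's default: zero mean and unit (co)variance, i.e. mu = 0, s = log 1 = 0.
mu, s = 0.0, 0.0

# ADAM with default betas/eps; lr = 0.05 sits in the paper's 1e-1..1e-3 range.
lr, b1, b2, adam_eps = 0.05, 0.9, 0.999, 1e-8
m = [0.0, 0.0]
v = [0.0, 0.0]
B = 2  # batch size, the paper's reported default baseline

avg_mu = avg_sigma = 0.0
n_avg = 0
for t in range(1, 4001):
    # Monte Carlo gradient of the negative ELBO via the reparameterization
    # trick: z = mu + sigma * eps with eps ~ N(0, 1).
    g_mu = g_s = 0.0
    for _ in range(B):
        e = random.gauss(0.0, 1.0)
        z = mu + math.exp(s) * e
        gl = grad_logp(z)
        g_mu += -gl / B                       # d(-ELBO)/d mu (path term)
        g_s += (-gl * e * math.exp(s)) / B    # d(-ELBO)/d s  (path term)
    g_s -= 1.0  # entropy of N(mu, exp(s)^2) is s + const, so dH/ds = 1

    # Standard ADAM update with bias correction.
    for i, g in enumerate((g_mu, g_s)):
        m[i] = b1 * m[i] + (1 - b1) * g
        v[i] = b2 * v[i] + (1 - b2) * g * g
        mh = m[i] / (1 - b1 ** t)
        vh = v[i] / (1 - b2 ** t)
        step = lr * mh / (math.sqrt(vh) + adam_eps)
        if i == 0:
            mu -= step
        else:
            s -= step

    # Average the final iterates to smooth out stochastic-gradient noise.
    if t > 3000:
        avg_mu += mu
        avg_sigma += math.exp(s)
        n_avg += 1

print(avg_mu / n_avg, avg_sigma / n_avg)  # should approach (3, 2)
```

Because the target is itself Gaussian, the fitted `(mu, sigma)` should approach the true `(3, 2)`; the averaging over the final iterates is only there to stabilize the noisy B = 2 gradient estimates and is not part of the paper's protocol.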