Variational inference via Wasserstein gradient flows

Authors: Marc Lambert, Sinho Chewi, Francis Bach, Silvère Bonnabel, Philippe Rigollet

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Moreover, in Section 4.1, we apply numerical integration based on cubature rules for Gaussian integrals to the system of ODEs (4), thus arriving at a fast method with robust empirical performance (details in Sections I and J). We validate the empirical performance of our method with promising experimental results (see Section J)." |
| Researcher Affiliation | Academia | Marc Lambert (DGA; INRIA, École Normale Supérieure, PSL Research University) marc.lambert@inria.fr; Sinho Chewi (MIT) schewi@mit.edu; Francis Bach (INRIA, École Normale Supérieure, PSL Research University) francis.bach@inria.fr; Silvère Bonnabel (MINES Paris PSL, Université de la Nouvelle-Calédonie) silvere.bonnabel@minesparis.psl.eu; Philippe Rigollet (MIT) rigollet@math.mit.edu |
| Pseudocode | Yes | Algorithm 1: Bures-Wasserstein SGD |
| Open Source Code | Yes | Code for the experiments is available at https://github.com/marc-h-lambert/W-VI. |
| Open Datasets | No | "We have tested our method on a bimodal distribution and on a posterior distribution arising from a logistic regression problem." The paper does not provide concrete access information (a specific link, DOI, repository name, formal citation, or reference to an established benchmark dataset) for a publicly available dataset. |
| Dataset Splits | No | The paper does not provide dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning. |
| Hardware Specification | No | The paper does not provide hardware details (GPU/CPU models, processor speeds, or memory amounts) for its experiments. |
| Software Dependencies | No | The paper does not provide ancillary software details (e.g., library or solver names with version numbers, such as Python 3.8 or CPLEX 12.4) needed to replicate the experiments. |
| Experiment Setup | No | The paper describes algorithmic components and theoretical conditions, but does not provide concrete experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or training configurations used in the empirical evaluations. |
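To make the Pseudocode entry concrete, below is a minimal sketch of the Bures-Wasserstein SGD iteration named in Algorithm 1, as we recall its form from the paper: the mean moves along the expected gradient of the potential V, and the covariance is updated by a symmetric congruence with M = I - h(E[∇²V] - Σ⁻¹). For illustration we pick a quadratic potential V(x) = ½‖x − μ‖² (target N(μ, I)), so the Gaussian expectations E[∇V] = m − μ and E[∇²V] = I are available in closed form; the paper instead approximates them with samples or cubature rules. The target, step size, and initialization here are our own illustrative assumptions, not values from the paper.

```python
import numpy as np

def bw_sgd_gaussian_target(mu, h=0.1, n_iters=500):
    """Sketch of a Bures-Wasserstein gradient-descent iteration for Gaussian VI.

    Illustrative special case: target N(mu, I), i.e. V(x) = 0.5 * ||x - mu||^2,
    so E[grad V(X)] = m - mu and E[Hess V(X)] = I in closed form. The paper
    replaces these expectations with stochastic samples or cubature estimates.
    """
    d = mu.shape[0]
    m = np.zeros(d)                 # initial variational mean (assumed)
    Sigma = 4.0 * np.eye(d)         # initial variational covariance (assumed)
    I = np.eye(d)
    for _ in range(n_iters):
        grad_mean = m - mu          # E[grad V] under N(m, Sigma), closed form
        hess_mean = I               # E[Hess V] under N(m, Sigma), closed form
        m = m - h * grad_mean
        M = I - h * (hess_mean - np.linalg.inv(Sigma))
        Sigma = M @ Sigma @ M       # congruence update keeps Sigma symmetric PSD
    return m, Sigma
```

Because the expectations are exact here, the iteration is deterministic and the variational Gaussian converges to the target: the returned mean approaches μ and the covariance approaches the identity.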