Graphical model inference: Sequential Monte Carlo meets deterministic approximations

Authors: Fredrik Lindsten, Jouni Helske, Matti Vihola

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper we present a way of bridging the gap between deterministic and stochastic inference. Specifically, we suggest an efficient sequential Monte Carlo (SMC) algorithm for PGMs which can leverage the output from deterministic inference methods. While generally applicable, we show explicitly how this can be done with loopy belief propagation, expectation propagation, and Laplace approximations. The resulting algorithm can be viewed as a post-correction of the biases associated with these methods and, indeed, numerical results show clear improvements over the baseline deterministic methods as well as over plain SMC. (A minimal sketch of this post-correction idea is given after the table.)
Researcher Affiliation | Academia | Fredrik Lindsten, Department of Information Technology, Uppsala University, Uppsala, Sweden, fredrik.lindsten@it.uu.se; Jouni Helske, Department of Science and Technology, Linköping University, Norrköping, Sweden, jouni.helske@liu.se; Matti Vihola, Department of Mathematics and Statistics, University of Jyväskylä, Jyväskylä, Finland, matti.s.vihola@jyu.fi
Pseudocode | Yes | Algorithm 1: Sequential Monte Carlo (all steps are for i = 1, ..., N)
Open Source Code | No | The paper does not provide any concrete access information (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described.
Open Datasets | Yes | First we consider a synthetic toy model with 4 topics and 10 words, for which the exact likelihood can be computed. ... In the middle and right panels of Figure 2 we show results for two real datasets, PubMed Central abstracts and 20 newsgroups, respectively (see [37]).
Dataset Splits | No | The paper mentions "held-out documents" but does not provide specific details on training, validation, and test splits (e.g., percentages, sample counts, or explicit splitting methodology).
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions the "R package INLA [21]" but does not provide a specific version number for this or any other software dependency.
Experiment Setup | Yes | Each algorithm is run 50 times for varying number of particles. ... N, the number of particles, from 64 up to 1024. For both SMC approaches, we used adaptive resampling based on effective sample size with threshold of N/2. (A sketch of the ESS-based N/2 resampling rule follows below.)
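
To make the "post-correction" idea quoted under Research Type concrete, the following is a minimal, self-contained sketch rather than the authors' Algorithm 1: a crude Laplace-style Gaussian approximation of a toy unnormalized target is used as an importance-sampling proposal, and the importance weights correct the bias of the approximation. The toy target, the grid-based mode search, and all names in the snippet are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy unnormalized 1-D log-density; in the paper the targets come from a
# probabilistic graphical model, here we only need something non-Gaussian.
def log_target(x):
    return -0.5 * x**2 + 0.5 * np.sin(3.0 * x)

# Step 1: deterministic approximation (a crude Laplace approximation:
# locate the mode on a grid and match the local curvature).
grid = np.linspace(-5.0, 5.0, 10_001)
mode = grid[np.argmax(log_target(grid))]
h = 1e-4
curv = (log_target(mode + h) - 2 * log_target(mode) + log_target(mode - h)) / h**2
sigma = 1.0 / np.sqrt(-curv)              # Gaussian approximation N(mode, sigma^2)

# Step 2: use the approximation as an importance-sampling proposal and let the
# weights correct its bias (the "post-correction" idea, in miniature).
rng = np.random.default_rng(0)
N = 1024                                    # number of particles
x = rng.normal(mode, sigma, size=N)         # draw particles from the approximation
log_q = -0.5 * ((x - mode) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))
log_w = log_target(x) - log_q               # unnormalized importance weights
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Self-normalized importance-sampling estimate of a posterior expectation.
print("estimated E[x] under the target:", np.sum(w * x))
```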
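
The Experiment Setup row quotes adaptive resampling based on effective sample size with threshold N/2. Below is a minimal sketch of that rule, assuming standard multinomial resampling; the helper names (ess, maybe_resample) and the synthetic particles and weights are illustrative, not from the paper.

```python
import numpy as np

def ess(weights):
    """Effective sample size of a vector of normalized weights."""
    return 1.0 / np.sum(weights ** 2)

def maybe_resample(particles, weights, rng, threshold_frac=0.5):
    """Resample (multinomially) only when ESS drops below threshold_frac * N,
    matching the N/2 rule quoted in the experiment setup."""
    n = len(weights)
    if ess(weights) < threshold_frac * n:
        idx = rng.choice(n, size=n, p=weights)
        particles = particles[idx]
        weights = np.full(n, 1.0 / n)       # weights are reset after resampling
    return particles, weights

# Tiny usage example with synthetic particles and weights.
rng = np.random.default_rng(1)
particles = rng.normal(size=64)
raw = rng.exponential(size=64)
weights = raw / raw.sum()
particles, weights = maybe_resample(particles, weights, rng)
print("ESS after step:", ess(weights))
```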