Marginal Inference in Continuous Markov Random Fields Using Mixtures

Authors: Yuanzhen Guo, Hao Xiong, Nicholas Ruozzi (pp. 7834-7841)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We provide support for our claimed advantages from both a theoretical and a practical perspective: We apply our method to a variety of problems arising from real and synthetic data sets, in each case demonstrating the superior performance of our approach for the marginal inference task."
Researcher Affiliation | Academia | "Yuanzhen Guo, Hao Xiong, Nicholas Ruozzi, University of Texas at Dallas, 800 W. Campbell Road, Richardson, TX 75080, {yuanzhen.guo, hao.xiong, nicholas.ruozzi}@utdallas.edu"
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | "For this set of experiments, we selected a variety of data sets from the UCI Machine Learning Repository (Dheeru and Karra Taniskidou 2017) with between 4 and 30 variables."
Dataset Splits | No | The paper evaluates inference methods on pre-constructed graphical models and reports approximation quality; it does not specify explicit training, validation, and test splits in the traditional model-training sense.
Hardware Specification | Yes | "We applied our method directly on the pixel level with L = 1 and KQ = 11 near the zero temperature limit with a GPU implementation on an NVIDIA Tesla V100."
Software Dependencies | No | "We implemented our approach that approximates the beliefs as independent Gaussian mixtures, dubbed QBethe, using standard projected gradient ascent with a diminishing step size rule in MATLAB without parallelization. We compare against the Gaussian EP, EPBP, and PBP methods (also implemented in MATLAB)." No specific version numbers for MATLAB or any other libraries are provided. (A sketch of this optimization scheme appears after the table.)
Experiment Setup | Yes | "For these experiments, QBethe was run from a random initialization with KQ = 4 quadrature points and L = 5 mixture components. PBP and EPBP were run with 20 particles to ensure that all three methods have roughly the same per-iteration complexity and use the same number of points in the integral approximations. [...] The number of particles for the sampling methods was set to 100. QBethe was run with L = 1 and three quadrature points." (A quadrature sketch also follows the table.)
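
The paper describes its optimizer only in prose. Below is a minimal Python sketch of projected gradient ascent with a diminishing step size, assuming a generic concave objective and a probability-simplex feasible set (a natural constraint for mixture weights); the function names, the projection routine, and the toy objective are our assumptions, not the authors' MATLAB implementation.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex via the
    # standard sort-based algorithm. (The actual feasible set of the
    # QBethe objective is an assumption on our part.)
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def projected_gradient_ascent(grad, x0, eta0=1.0, iters=500):
    # Maximize a concave objective: ascend along the gradient with the
    # diminishing step size eta0 / sqrt(t), then project back onto the
    # feasible set after each step.
    x = project_simplex(np.asarray(x0, dtype=float))
    for t in range(1, iters + 1):
        x = project_simplex(x + (eta0 / np.sqrt(t)) * grad(x))
    return x

# Toy check: maximizing f(x) = -0.5 * ||x - c||^2 over the simplex
# recovers the Euclidean projection of c onto the simplex.
c = np.array([0.8, 0.6, 0.1])
w = projected_gradient_ascent(lambda x: c - x, np.ones(3) / 3.0)
print(w, project_simplex(c))  # both are approximately [0.6, 0.4, 0.0]
```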
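
The experiment setup refers to KQ quadrature points and L mixture components per belief. As an illustration of the underlying numerical technique, here is a minimal sketch of a one-dimensional expectation under a Gaussian mixture approximated with Gauss-Hermite quadrature; the function names and the example mixture are ours, not the authors' code.

```python
import numpy as np

def gh_expectation(f, mu, sigma, K=4):
    # E_{N(mu, sigma^2)}[f(x)] via K-point Gauss-Hermite quadrature:
    # int f(x) N(x; mu, sigma^2) dx
    #   ~= (1/sqrt(pi)) * sum_k v_k * f(mu + sqrt(2)*sigma*t_k)
    t, v = np.polynomial.hermite.hermgauss(K)
    return np.dot(v, f(mu + np.sqrt(2.0) * sigma * t)) / np.sqrt(np.pi)

def mixture_expectation(f, weights, mus, sigmas, K=4):
    # Expectation under a 1-D Gaussian mixture with L components:
    # a weighted sum of per-component quadrature approximations.
    return sum(w * gh_expectation(f, m, s, K)
               for w, m, s in zip(weights, mus, sigmas))

# Example: E[x^2] under 0.5*N(0, 1) + 0.5*N(2, 0.25). The exact value
# is 0.5*(0 + 1) + 0.5*(4 + 0.25) = 2.625, and K = 4 points recover it
# exactly since Gauss-Hermite integrates polynomials of degree <= 2K-1.
print(mixture_expectation(np.square, [0.5, 0.5], [0.0, 2.0], [1.0, 0.5]))
```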