Lifted Hybrid Variational Inference

Authors: Yuqiao Chen, Yibo Yang, Sriraam Natarajan, Nicholas Ruozzi

IJCAI 2020

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We investigate the performance of the proposed lifted variational inference approach on a variety of both real and synthetic models. We compare the performance of our variational approach using different entropy approximations... with message-passing algorithms..." |
| Researcher Affiliation | Academia | Yuqiao Chen¹, Yibo Yang², Sriraam Natarajan¹ and Nicholas Ruozzi¹ (¹The University of Texas at Dallas; ²University of California, Irvine) |
| Pseudocode | Yes | Algorithm 1: Coarse-to-Fine Lifted VI |
| Open Source Code | Yes | "Source code is available on https://github.com/leodd/Lifted-Hybrid-Variational-Inference." |
| Open Datasets | Yes | "We construct a Toy Hybrid MLN for a position domain... The Paper Popularity HMLN domain is determined by the following formulas... The Robot Mapping HMLN domain contains 3 discrete relational variables... We performed approximate inference on a RGM with the recession domain from [Cseke and Heskes, 2011]. We use extracted groundwater level data from the Republican River Compact Association model [McKusick, 2003]." |
| Dataset Splits | No | The paper describes generating random evidence for the models and randomly choosing variables for testing, but it does not specify explicit train/validation/test splits, percentages, or cross-validation procedures. |
| Hardware Specification | Yes | "All timing results, unless otherwise noted, were performed on a single core of a 2.2 GHz Intel Core i7-8750H CPU with 16GB memory." |
| Software Dependencies | No | The paper mentions TensorFlow but does not specify its version, nor the versions of any other software dependencies, making exact replication difficult. |
| Experiment Setup | Yes | "All three methods used the Adam optimizer with learning rate 0.2, β1 = 0.9, β2 = 0.999." |
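For concreteness, the reported optimizer configuration can be reproduced as in the minimal sketch below. This is not the authors' code: it assumes TensorFlow 2 (the paper names only TensorFlow, with no version), and the quadratic loss and `params` variable are hypothetical stand-ins for the paper's variational objective and parameters. Only the Adam hyperparameters come from the paper.

```python
import tensorflow as tf

# Adam hyperparameters as reported in the paper:
# learning rate 0.2, beta1 = 0.9, beta2 = 0.999.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.2, beta_1=0.9, beta_2=0.999)

# Hypothetical variational parameters; shape chosen arbitrarily
# for illustration.
params = tf.Variable(tf.zeros([10]))

for step in range(100):
    with tf.GradientTape() as tape:
        # Placeholder objective; the paper instead minimizes a
        # variational free-energy objective over the lifted model.
        loss = tf.reduce_sum(tf.square(params - 1.0))
    grads = tape.gradient(loss, [params])
    optimizer.apply_gradients(zip(grads, [params]))
```

Note that 0.2 is a large step size by Adam's defaults (0.001); per the quoted setup, all three compared methods shared this configuration, so the comparison is on equal footing.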