Efficient Inference for Untied MLNs

Authors: Somdeb Sarkhel, Deepak Venugopal, Nicholas Ruozzi, Vibhav Gogate

IJCAI 2017

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We evaluate our proposed exact and approximate encodings on several MLN benchmarks and compare their performance to [Venugopal et al., 2015]. Our results clearly demonstrate that our new encodings, both exact and approximate, substantially improve the scalability, convergence, and accuracy of Gibbs sampling and Max Walk SAT." |
| Researcher Affiliation | Collaboration | ¹Adobe Research, San Jose, CA; ²Department of Computer Science, The University of Memphis; ³Department of Computer Science, The University of Texas at Dallas |
| Pseudocode | Yes | Algorithm 1, Cluster-Param (parameter tensor θX+, cluster sizes kX+), and Algorithm 2, Partition-Network (MLN function f, set of partitions {P(l)}) |
| Open Source Code | No | The paper states that the authors "implemented our system on top of the publicly available Magician system [Venugopal et al., 2016] that uses #SG" and provides a link in the references (https://github.com/dvngp/CD-Learn). However, this link points to the third-party Magician system, not to code the authors developed for this paper's contributions (the novel encodings and clustering approach). |
| Open Datasets | Yes | "We conducted our experiments on the following three datasets: (i) Student MLN having the formula Student(x, +p) Publish(x, z) Cited(z, +u) (ii) WebKB MLN from the Alchemy web page (iii) Citation Information-Extraction (IE) MLN from the Alchemy web page" |
| Dataset Splits | No | The paper does not provide specific training/validation/test splits or mention a validation set; it refers only to the datasets as a whole. |
| Hardware Specification | No | The paper provides no hardware details such as GPU/CPU models, memory, or cloud instance types used for the experiments. |
| Software Dependencies | No | The paper mentions using the "Magician system [Venugopal et al., 2016]" but does not specify its version number or any other software dependencies with their respective versions. |
| Experiment Setup | Yes | "We evaluate the graphical model encodings proposed in our paper by using them within two inference algorithms: (1) Gibbs sampling to compute marginal probabilities and (2) Max Walk SAT for MAP inference. ... Each solver was given 200 seconds for each dataset. ... We set up five Gibbs samplers from random initialization and measure within chain and across chain variances for the marginal probabilities for 1000 randomly chosen ground query atoms." |
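The Student MLN formula above uses Alchemy's "+" notation, under which a formula is expanded into one weighted copy per constant of the "+"-marked arguments, so the weights are untied across those constants. A minimal sketch of this expansion, using hypothetical tiny domains (the table does not report the paper's actual domain sizes):

```python
from itertools import product

# Hypothetical domains for illustration only; the real domains in the
# benchmarks are much larger and are not listed in this report.
X = ["a1", "a2"]        # people (plain variable x: weights tied)
Z = ["p1", "p2"]        # papers (plain variable z: weights tied)
P = ["cs", "math"]      # '+p' values: one weight class per constant
U = ["t1", "t2"]        # '+u' values: one weight class per constant

# A '+' argument splits the formula's groundings into separate weight
# classes, one per combination of '+'-constants (p, u); groundings
# within a class share a single learned weight.
weight_classes = {}
for p, u in product(P, U):
    weight_classes[(p, u)] = [(x, p, z, u) for x, z in product(X, Z)]

print(len(weight_classes))                 # 4 weight classes: |P| * |U|
print(len(weight_classes[("cs", "t1")]))   # 4 tied groundings per class
```

With all weights untied this way, the number of parameters grows with the domain sizes of the "+" arguments, which is what makes inference in untied MLNs harder than in the fully tied case.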
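The convergence check described in the Experiment Setup row, running several Gibbs chains and comparing within-chain to across-chain variance, is the idea behind the standard Gelman-Rubin diagnostic. A minimal sketch of that statistic for one query atom (the paper's exact diagnostic formula is not given in this report, so this is the textbook version):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m chains of length n.

    chains: array of shape (m, n), e.g. running marginal estimates of one
    ground query atom from m independently initialized Gibbs samplers.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    # W: average within-chain variance.
    W = chains.var(axis=1, ddof=1).mean()
    # B/n: variance of the per-chain means across chains.
    B_over_n = chains.mean(axis=1).var(ddof=1)
    # Pooled variance estimate; R-hat near 1 indicates the chains agree.
    var_hat = (n - 1) / n * W + B_over_n
    return float(np.sqrt(var_hat / W))

# Five well-mixed chains drawn from the same distribution give R-hat ~ 1.
rng = np.random.default_rng(0)
print(gelman_rubin(rng.random((5, 1000))))
```

When the chains have not mixed (e.g. they are stuck near different initializations), the across-chain term dominates and R-hat rises well above 1, which is the failure mode the five-sampler setup in the table is designed to detect.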