Scaling-up Importance Sampling for Markov Logic Networks

Authors: Deepak Venugopal, Vibhav G Gogate

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments on several MLNs clearly demonstrate the promise of our approach."
Researcher Affiliation | Academia | Deepak Venugopal, Department of Computer Science, University of Texas at Dallas (dxv021000@utdallas.edu); Vibhav Gogate, Department of Computer Science, University of Texas at Dallas (vgogate@hlt.utdallas.edu)
Pseudocode | Yes | Algorithm 1: Compute-Marginals
Open Source Code | No | The paper does not provide access to source code for the described methodology; it only references the Alchemy system, a third-party tool.
Open Datasets | Yes | "Our test MLNs include Smokers and HMM (with few states) from the Alchemy website [10] and two additional MLNs, Relation (R(x, y) S(y, z)), Log Req (randomly generated formulas with singletons)." [10] S. Kok, M. Sumner, M. Richardson, P. Singla, H. Poon, D. Lowd, J. Wang, and P. Domingos. The Alchemy System for Statistical Relational AI. Technical report, Department of Computer Science and Engineering, University of Washington, Seattle, WA, 2008. http://alchemy.cs.washington.edu.
Dataset Splits | No | The paper mentions randomly setting groundings as true or false evidence (25% each), but it does not specify explicit training, validation, or test splits as percentages or counts needed for reproduction.
Hardware Specification | Yes | "We ran all experiments on a quad-core, 6GB RAM, Ubuntu laptop."
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | Yes | "For clustering, we used the scheme in [19] with KMeans++ as the clustering method. For Gibbs sampling, we set the thinning parameter to 5 and use a burn-in of 50 samples."
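The "25% each" evidence setup noted under Dataset Splits can be sketched as follows. This is a minimal illustration, not the paper's code: the ground-atom names and the `random_evidence` helper are hypothetical, and it only shows one plausible way to fix 25% of groundings as true and 25% as false, leaving the rest unknown.

```python
import random

def random_evidence(groundings, frac_true=0.25, frac_false=0.25, seed=0):
    """Randomly fix fractions of ground atoms as true/false evidence.

    Hypothetical helper illustrating the paper's setup of marking 25% of
    groundings true and 25% false; the remaining atoms stay unobserved.
    """
    rng = random.Random(seed)
    shuffled = list(groundings)
    rng.shuffle(shuffled)
    n_true = int(len(shuffled) * frac_true)
    n_false = int(len(shuffled) * frac_false)
    true_set = set(shuffled[:n_true])
    false_set = set(shuffled[n_true:n_true + n_false])  # disjoint from true_set
    return true_set, false_set

# Example: groundings of a binary predicate R(x, y) over a domain of size 10.
atoms = [f"R({i},{j})" for i in range(10) for j in range(10)]
true_ev, false_ev = random_evidence(atoms)
```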
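The Gibbs sampling parameters quoted under Experiment Setup (burn-in of 50 states, thinning of 5) can be illustrated with a toy sampler. This is a sketch under strong assumptions: the model here is a set of independent binary variables with a placeholder uniform conditional, standing in for the paper's MLN ground atoms; only the burn-in and thinning mechanics are the point.

```python
import random

def gibbs_sample(n_vars, n_samples, burn_in=50, thin=5, seed=0):
    """Toy Gibbs sampler over binary variables.

    Discards the first `burn_in` states, then keeps every `thin`-th state
    (the paper uses burn_in=50, thin=5). The conditional used to resample
    each variable is a uniform placeholder, not an MLN conditional.
    """
    rng = random.Random(seed)
    state = [0] * n_vars
    kept = []
    total_steps = burn_in + n_samples * thin
    for t in range(total_steps):
        i = t % n_vars  # sweep variables in round-robin order
        state[i] = 1 if rng.random() < 0.5 else 0  # placeholder conditional
        if t >= burn_in and (t - burn_in) % thin == 0:
            kept.append(list(state))
    return kept

samples = gibbs_sample(n_vars=4, n_samples=10)
marginals = [sum(s[i] for s in samples) / len(samples) for i in range(4)]
```

Marginal estimates are then the fraction of kept samples in which each atom is true, as in the paper's Compute-Marginals setting.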