Scaling-Up Inference in Markov Logic
Authors: Deepak Venugopal
AAAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our evaluation demonstrated that LBG is far superior to propositional approaches in terms of scalability and convergence. In lifted importance sampling (Gogate, Jha, and Venugopal 2012), we draw lifted samples from a proposal distribution instead of sampling individual groundings. On three BioNLP datasets, our system was better or on par with the best systems and outperformed all previous MLN-based systems. |
| Researcher Affiliation | Academia | Deepak Venugopal, Department of Computer Science, The University of Texas at Dallas, dxv021000@utdallas.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code, such as a repository link or an explicit code release statement for the methodology described. |
| Open Datasets | Yes | On three BioNLP datasets, our system was better or on par with the best systems and outperformed all previous MLN-based systems. In our recent paper (Venugopal et al. 2014), we developed a joint inference based event extraction system using MLNs. |
| Dataset Splits | No | The paper mentions using "three BioNLP datasets" but does not provide specific dataset split information (e.g., exact percentages, sample counts, or cross-validation details) needed to reproduce the data partitioning for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | No | The paper does not contain specific experimental setup details, concrete hyperparameter values, or training configurations in the main text. |
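
To make the lifted importance sampling idea quoted in the Research Type row concrete (drawing one lifted sample for a whole group of interchangeable groundings instead of sampling each grounding separately), here is a minimal sketch, assuming a toy MLN with a single unit clause R(x) of weight w over n interchangeable constants. The function name and setup below are hypothetical illustrations, not code from the paper: under symmetry, every assignment with k true groundings has the same unnormalized weight exp(w*k), so a single binomial draw over the count k can replace n per-atom samples.

```python
import math
import random

def lifted_importance_estimate_Z(n, w, q=0.5, num_samples=10000):
    """Estimate the partition function Z of a toy MLN with one unit
    clause R(x) of weight w over n interchangeable constants.

    Instead of sampling each of the n ground atoms, draw a single
    "lifted" sample: the number k of true groundings. Under symmetry,
    every assignment with k true atoms has the same unnormalized
    weight exp(w * k), and there are C(n, k) such assignments.
    """
    total = 0.0
    for _ in range(num_samples):
        # Proposal: k ~ Binomial(n, q) -- one draw covers a whole
        # group of symmetric groundings at once.
        k = sum(random.random() < q for _ in range(n))
        proposal_prob = math.comb(n, k) * (q ** k) * ((1 - q) ** (n - k))
        # Unnormalized target mass of all assignments with k true atoms.
        target_mass = math.comb(n, k) * math.exp(w * k)
        total += target_mass / proposal_prob
    return total / num_samples

if __name__ == "__main__":
    n, w = 20, 0.5
    exact_Z = (1 + math.exp(w)) ** n
    estimate = lifted_importance_estimate_Z(n, w)
    print(f"exact Z = {exact_Z:.4e}, lifted IS estimate = {estimate:.4e}")
```

For this toy model the estimate converges to the exact partition function (1 + e^w)^n, and the sample space collapses from 2^n ground assignments to n + 1 counts. Choosing the proposal parameter q = e^w / (1 + e^w) would make every importance weight exactly equal to Z, which is the kind of variance reduction that motivates lifted proposals in the first place.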