Approximate Lifting Techniques for Belief Propagation
Authors: Parag Singla, Aniruddh Nath, Pedro Domingos
AAAI 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive evaluation on six domains demonstrates great efficiency gains with only minor (or no) loss in accuracy. We compared the performance of lifted BP (exact and approximate) with the ground version on five real domains and one artificial domain. |
| Researcher Affiliation | Academia | Parag Singla, Department of Computer Science and Engineering, Indian Institute of Technology Delhi, Hauz Khas, New Delhi, 110016, India (parags@cse.iitd.ac.in); Aniruddh Nath and Pedro Domingos, Department of Computer Science and Engineering, University of Washington, Seattle, WA 98195-2350, U.S.A. ({nath, pedrod}@cs.washington.edu) |
| Pseudocode | Yes | Algorithm 1 LNC(MLN M, constants C, evidence E) (lifted network construction) and Algorithm 2 Form Hypercubes(Tuple set T); an illustrative hypercube-formation sketch follows the table. |
| Open Source Code | No | No explicit statement or link providing access to the source code for the methodology described in this paper was found. The paper mentions extending the 'open-source Alchemy system', but this does not constitute releasing their own specific implementation. |
| Open Datasets | Yes | Entity Resolution on McCallum's Cora database... Hyperlink Prediction on the WebKB dataset (Craven and Slattery 2001)... Image Denoising using the binary image in Bishop (2006)... Social Network link and label prediction, on the Friends and Smokers MLN from Richardson and Domingos (2006). |
| Dataset Splits | Yes | The data was divided into 5 splits for cross-validation. For a randomly chosen 10% of the people, we know (a) whether they smoke or not, and (b) who 10 of their friends are (other friendship relations are still assumed to be unknown). An illustrative evidence-sampling sketch follows the table. |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models or memory) used for running experiments were provided. |
| Software Dependencies | No | The paper mentions extending the 'open-source Alchemy system' and using 'L-BFGS' but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | We ran BP for 1000 steps for all algorithms in all experiments. Voted perceptron (with η = 10^-5 and a Gaussian prior) was used for training the model. For early stopping, three iterations of lifted network construction were used... For noise-tolerant hypercube construction, we allowed at most one tuple to have a truth value differing from the majority value. A hypothetical configuration summary follows the table. |
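
The pseudocode row above names Algorithm 2, Form Hypercubes, but only by its signature. As a rough illustration of the underlying idea, the Python sketch below covers a set of ground tuples with disjoint hypercubes (Cartesian products of per-argument constant sets) by recursive bounding-box splitting. The recursion, the split heuristic, and all identifiers are assumptions for illustration, not the paper's exact greedy procedure.

```python
from functools import reduce

def form_hypercubes(tuples):
    """Cover a set of ground tuples with disjoint hypercubes.

    A hypercube is represented as a tuple of frozensets, one per
    argument position, denoting the Cartesian product of those sets.
    This bounding-box-and-split scheme is an illustrative stand-in
    for the paper's Algorithm 2, not its exact procedure.
    """
    tuples = set(tuples)
    if not tuples:
        return []
    arity = len(next(iter(tuples)))
    dims = [sorted({t[i] for t in tuples}) for i in range(arity)]
    volume = reduce(lambda acc, d: acc * len(d), dims, 1)
    if volume == len(tuples):
        # The bounding hypercube contains exactly the given tuples.
        return [tuple(frozenset(d) for d in dims)]
    # Otherwise split along the dimension with the most distinct
    # values (a hypothetical heuristic) and recurse on both halves.
    axis = max(range(arity), key=lambda i: len(dims[i]))
    lower = set(dims[axis][: len(dims[axis]) // 2])
    left = {t for t in tuples if t[axis] in lower}
    return form_hypercubes(left) + form_hypercubes(tuples - left)

# Example: four Friends(x, y) tuples collapse into one 2x2 hypercube.
print(form_hypercubes({("Anna", "Bob"), ("Anna", "Carl"),
                       ("Dan", "Bob"), ("Dan", "Carl")}))
```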
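
The dataset-splits row describes the evidence made available in the social-network domain (smoking status plus 10 friendships for a random 10% of people). A minimal sketch of how such evidence could be sampled is shown below; the predicate names, data structures, and sampling procedure are assumptions, since the paper only states the fractions.

```python
import random

def sample_evidence(people, smokes, friends, known_frac=0.1,
                    friends_per_person=10, seed=0):
    """Reveal Smokes(x) and 10 friendships for a random 10% of people.

    `smokes` maps each person to a boolean; `friends` maps each person
    to a list of their friends (assumed to have at least 10 entries).
    Illustrative only; not the paper's code.
    """
    rng = random.Random(seed)
    known = rng.sample(people, max(1, int(known_frac * len(people))))
    evidence = []
    for person in known:
        # (a) whether the person smokes, either way, is evidence.
        evidence.append(("Smokes", (person,), smokes[person]))
        # (b) 10 of the person's friendships become evidence.
        for friend in rng.sample(friends[person], friends_per_person):
            evidence.append(("Friends", (person, friend), True))
    return evidence
```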
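
Finally, the experiment-setup row can be condensed into a single configuration summary. The structure below is a hypothetical aggregation of the reported settings (it is not Alchemy's actual option format), together with the stated noise-tolerance rule for hypercube construction.

```python
# Hypothetical summary of the reported settings; Alchemy's real
# command-line options are deliberately not reproduced here.
EXPERIMENT_CONFIG = {
    "bp_steps": 1000,                # BP run for 1000 steps in all experiments
    "learner": "voted_perceptron",
    "learning_rate": 1e-5,           # eta = 10^-5
    "prior": "gaussian",
    "early_stopping_lnc_iters": 3,   # iterations of lifted network construction
    "hypercube_noise_tolerance": 1,  # at most one tuple may differ from majority
}

def within_noise_tolerance(truth_values, max_noise=1):
    """Accept a hypercube when at most `max_noise` tuples disagree with
    the majority truth value (the paper allows at most one)."""
    ones = sum(bool(v) for v in truth_values)
    minority = min(ones, len(truth_values) - ones)
    return minority <= max_noise
```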