On Lifted Inference Using Neural Embeddings

Authors: Mohammad Maminur Islam, Somdeb Sarkhel, Deepak Venugopal (pp. 7916-7923)

AAAI 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We conduct our experiments on three problems, Webpage classification (Webkb), Entity Resolution (ER) and Protein Interaction (Protein)... Our results clearly show that our approach is more scalable and accurate for several benchmark inference problems." |
| Researcher Affiliation | Collaboration | Mohammad Maminur Islam (1), Somdeb Sarkhel (2), Deepak Venugopal (1); (1) The University of Memphis, (2) Adobe Research; mislam3@memphis.edu, sarkhel@adobe.com, dvngopal@memphis.edu |
| Pseudocode | Yes | Algorithm 1: Obj2vec Lifting |
| Open Source Code | No | The paper states, "We implemented Obj2vec using the Gensim package (Řehůřek and Sojka 2010)" and refers to the Magician system's code (Venugopal, Sarkhel, and Gogate 2016), available at https://github.com/dvngp/CD-Learn, but does not provide access to the source code for its own Obj2vec method. |
| Open Datasets | Yes | "We conducted our experiments on three problems, Webpage classification (Webkb), Entity Resolution (ER) and Protein Interaction (Protein), all of which are publicly available in Alchemy (Kok et al. 2006)." |
| Dataset Splits | No | The paper states, "We then used around 10% of the benchmark data as test data," but gives no further details on training, validation, or other dataset splits. |
| Hardware Specification | No | The paper does not report the hardware (e.g., GPU/CPU models, memory) used to run its experiments. |
| Software Dependencies | No | The paper mentions the Gensim package, Tuffy (Niu et al. 2011), Magician (Venugopal, Sarkhel, and Gogate 2016), and the NIMFA Python library, but gives no version numbers for these dependencies, which are needed for reproducibility. |
| Experiment Setup | Yes | "For Obj2vec, we set the hidden layer to have 300 neurons (a typical size recommended for word embeddings (Mikolov et al. 2013))... For the sampler, we set p = 0.01 to insert random walks into the sampling... For NE, we control this by setting the α value during sampling to achieve the required CR." |
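The setup row notes that Obj2vec trains 300-dimensional word2vec-style embeddings of domain objects with Gensim. As a rough illustration of the general idea only (the atom format, the choice of context, and both helper functions below are assumptions for this sketch, not the paper's published preprocessing), one might turn an MLN evidence database into "sentences" over constants that a word2vec-style trainer could consume:

```python
from collections import defaultdict

# Hypothetical toy evidence database: (predicate, arg1, arg2) ground atoms.
# Atom names and constants are invented for illustration.
evidence = [
    ("Links", "P1", "P2"),
    ("Links", "P2", "P3"),
    ("HasWord", "P1", "learning"),
    ("HasWord", "P3", "learning"),
]

def atoms_to_sentences(atoms):
    """Treat each ground atom as a short 'sentence': predicate followed by
    its arguments. A word2vec-style model (e.g. 300-dimensional vectors,
    as in the paper's setup) could then embed objects from these."""
    return [[atom[0]] + list(atom[1:]) for atom in atoms]

def object_contexts(atoms):
    """For each constant, collect the predicates and constants it
    co-occurs with in some ground atom. Objects with similar contexts
    are the ones an embedding would place close together."""
    ctx = defaultdict(set)
    for pred, *args in atoms:
        for a in args:
            ctx[a].update({pred, *args} - {a})
    return ctx

sentences = atoms_to_sentences(evidence)
contexts = object_contexts(evidence)
```

Here P1 and P3 end up with overlapping contexts (both occur with HasWord and "learning"), which is the kind of evidence-level similarity an object embedding can exploit for lifting.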