Quantum Embedding of Knowledge for Reasoning

Authors: Dinesh Garg, Shajith Ikbal, Santosh K. Srivastava, Harit Vishwakarma, Hima Karanam, L Venkata Subramaniam

NeurIPS 2019

Reproducibility assessment: each variable below is listed with its result, followed by the supporting LLM response.
Research Type: Experimental. 'We evaluated the performance of E2R on two different kinds of tasks (i) link prediction task, and (ii) reasoning task. For link prediction, we chose FB15K and WN18 datasets because they are standard in the literature [5, 6, 24]... The experimental results illustrate the effectiveness of E2R relative to standard baselines.'
Researcher Affiliation: Collaboration. Dinesh Garg¹, Shajith Ikbal¹, Santosh K. Srivastava¹, Harit Vishwakarma², Hima Karanam¹, L Venkata Subramaniam¹ (¹IBM Research AI, India; ²Dept. of Computer Sciences, University of Wisconsin-Madison, USA).
Pseudocode: No. The paper describes its model and loss functions mathematically, but it does not include a dedicated pseudocode or algorithm block.
Open Source Code: No. The paper states 'We used OpenKE (https://github.com/thunlp/OpenKE) implementation of these approaches for our evaluation' for its baselines, but it provides no statement or link open-sourcing the proposed E2R code (see the OpenKE baseline sketch after this table).
Open Datasets: Yes. 'For link prediction, we chose FB15K and WN18 datasets because they are standard in the literature [5, 6, 24]... To evaluate the reasoning capabilities, we chose LUBM (Lehigh University Benchmark) dataset (http://swat.cse.lehigh.edu/projects/lubm/).'
Dataset Splits: No. The paper states 'The train and test sets of these datasets are respectively used for training and testing our proposed model.' and 'Tuning of the hyper-parameters for the baseline approaches was performed on the test set for FB15K and WN18 datasets but on the training set for LUBM1U. For E2R, the tuning was always done on the training set.' However, it never defines a separate validation split or reports its size or proportion.
Hardware Specification: Yes. 'Our experiments were performed on a Tesla K80 GPU machine.'
Software Dependencies: No. The paper states 'We implemented E2R model using PyTorch. We used SGD (Stochastic Gradient Descent) with ADAM optimizer [25]', but it does not give version numbers for PyTorch or any other software dependency (see the training-setup sketch after this table).
Experiment Setup: Yes. 'In all our experiments we used d = 100 for E2R model... We used 3 different negative entities per positive entity in our experimental setup.' (A negative-sampling sketch matching this ratio appears below.)
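
Since the authors released no code of their own, the baselines row points to OpenKE. As a minimal sketch, assuming OpenKE's PyTorch API as shown in its published examples (parameter names may differ across versions, and every hyper-parameter value below is an illustrative assumption, not the paper's tuned setting), a TransE baseline on FB15K could be configured like this:

```python
# Hedged sketch: a TransE baseline on FB15K via OpenKE (https://github.com/thunlp/OpenKE).
# Parameter names follow OpenKE's published examples and may vary by version;
# the values below are illustrative, not the paper's tuned hyper-parameters.
from openke.config import Trainer
from openke.module.model import TransE
from openke.module.loss import MarginLoss
from openke.module.strategy import NegativeSampling
from openke.data import TrainDataLoader

train_dataloader = TrainDataLoader(
    in_path="./benchmarks/FB15K/",   # FB15K ships with OpenKE's benchmark data
    nbatches=100,
    threads=8,
    sampling_mode="normal",
    bern_flag=1,
    filter_flag=1,
    neg_ent=3,                       # 3 negatives per positive, mirroring the paper's setup
    neg_rel=0)

transe = TransE(
    ent_tot=train_dataloader.get_ent_tot(),
    rel_tot=train_dataloader.get_rel_tot(),
    dim=100,                         # d = 100, matching the reported embedding size
    p_norm=1,
    norm_flag=True)

model = NegativeSampling(
    model=transe,
    loss=MarginLoss(margin=5.0),
    batch_size=train_dataloader.get_batch_size())

trainer = Trainer(model=model, data_loader=train_dataloader,
                  train_times=1000, alpha=1.0, use_gpu=True)
trainer.run()
```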
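For the software-dependencies row: the paper names PyTorch and the Adam optimizer but no versions. Below is a minimal sketch of that training setup, assuming a plain entity-embedding table, a toy entity count, a placeholder loss, and a default learning rate; only d = 100 comes from the paper. Pinning exact versions (e.g., in a requirements file) is precisely what this row finds missing.

```python
import torch

# Illustrative stand-ins: the paper releases no code, so the entity count,
# loss, and learning rate here are assumptions; only d = 100 is from the paper.
num_entities, d = 1000, 100
embeddings = torch.nn.Embedding(num_entities, d)
optimizer = torch.optim.Adam(embeddings.parameters(), lr=1e-3)

# One stochastic-gradient step on a toy batch ('SGD ... with ADAM optimizer').
batch = torch.randint(0, num_entities, (32,))
optimizer.zero_grad()
loss = embeddings(batch).pow(2).mean()  # placeholder loss, not the E2R objective
loss.backward()
optimizer.step()
```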
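For the experiment-setup row, the one concrete sampling detail is the 3:1 negative-to-positive ratio. A minimal sketch of entity corruption at that ratio follows; uniform, unfiltered sampling is an assumption, since the paper states only the ratio itself.

```python
import torch

def sample_negatives(positive_entities: torch.Tensor,
                     num_entities: int, k: int = 3) -> torch.Tensor:
    """Draw k negative entity ids per positive, uniformly at random.

    Uniform, unfiltered corruption is an assumption; the paper reports
    only the 3-negatives-per-positive ratio."""
    n = positive_entities.size(0)
    return torch.randint(0, num_entities, (n, k))

positives = torch.tensor([0, 5, 9])
negatives = sample_negatives(positives, num_entities=1000, k=3)
print(negatives.shape)  # torch.Size([3, 3]): 3 negatives per positive
```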