Reasoning Over Virtual Knowledge Bases With Open Predicate Relations
Authors: Haitian Sun, Patrick Verga, Bhuwan Dhingra, Ruslan Salakhutdinov, William W. Cohen
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate that OPQL outperforms prior VKB methods on two different KB reasoning tasks and, additionally, can be used as an external memory integrated into a language model (OPQL-LM) leading to improvements on two open-domain question answering tasks. |
| Researcher Affiliation | Collaboration | ¹Carnegie Mellon University, ²Google Research. |
| Pseudocode | No | The paper describes methods and equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | We use Wikipedia passages with hyperlinks as our pretraining. We use the entire Wikipedia as our text corpus for the open-domain QA tasks. MetaQA (Zhang et al., 2018) is a multi-hop QA dataset... Multi-hop Slot Filling (MSF) (Dhingra et al., 2020) presents a large scale multi-hop reasoning dataset... WebQuestionsSP (WebQSP) (Yih et al., 2015) is an open-domain Question Answering dataset... ComplexWebQuestions (ComplexWebQ) (Talmor & Berant, 2018) extends WebQuestionsSP to multi-hop questions. |
| Dataset Splits | Yes | We end up with 10K finetuning data for MetaQA and 19K for MSF. ComplexWebQuestions (ComplexWebQ) (Talmor & Berant, 2018) extends WebQuestionsSP to multi-hop questions. ... (dev) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., CPU/GPU models, memory). |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers needed to reproduce the experiment. |
| Experiment Setup | No | The paper provides some high-level finetuning details and notes that hyperparameters follow Verga et al. (2020), but does not report specific numerical hyperparameters or system-level training settings in the main text. |
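
For readers who consume these per-paper assessments programmatically (e.g., to aggregate verdicts across many papers), the table above maps naturally onto a small record type. The Python sketch below is purely illustrative: the `ReproVariable` class and `OPQL_ASSESSMENT` name are hypothetical, not part of any published tooling, and the evidence strings are abridged from the table.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ReproVariable:
    """One row of the assessment table: variable, verdict, abridged evidence."""
    name: str
    result: str   # e.g. "Yes", "No", "Experimental", "Collaboration"
    evidence: str


# The table above encoded as records (hypothetical schema; evidence abridged).
OPQL_ASSESSMENT = [
    ReproVariable("Research Type", "Experimental",
                  "OPQL outperforms prior VKB methods on two KB reasoning tasks"),
    ReproVariable("Researcher Affiliation", "Collaboration",
                  "Carnegie Mellon University; Google Research"),
    ReproVariable("Pseudocode", "No",
                  "no explicitly labeled pseudocode or algorithm blocks"),
    ReproVariable("Open Source Code", "No",
                  "no explicit statement or link to source code"),
    ReproVariable("Open Datasets", "Yes",
                  "Wikipedia, MetaQA, MSF, WebQSP, ComplexWebQ"),
    ReproVariable("Dataset Splits", "Yes",
                  "10K finetuning data for MetaQA and 19K for MSF"),
    ReproVariable("Hardware Specification", "No",
                  "no CPU/GPU models or memory details"),
    ReproVariable("Software Dependencies", "No",
                  "no versioned software dependencies"),
    ReproVariable("Experiment Setup", "No",
                  "high-level finetuning details only; "
                  "hyperparameters deferred to Verga et al. (2020)"),
]

if __name__ == "__main__":
    satisfied = [v.name for v in OPQL_ASSESSMENT if v.result == "Yes"]
    print("Satisfied variables:", satisfied)
    # -> Satisfied variables: ['Open Datasets', 'Dataset Splits']
```

A flat list of frozen records keeps the representation faithful to the table itself: one entry per row, with the verdict and evidence kept together so downstream aggregation never separates a result from its supporting quote.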