Variational Reasoning for Question Answering With Knowledge Graph

Authors: Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander Smola, Le Song

AAAI 2018

Reproducibility Variable: Research Type
Result: Experimental
LLM Response: Our method achieves state-of-the-art performance on a recent benchmark dataset in the literature. We also derive a series of new benchmark datasets... Extensive experiments show that our method achieves state-of-the-art performance on both single- and multi-hop datasets, demonstrating the capability of multi-hop reasoning.

Reproducibility Variable: Researcher Affiliation
Result: Collaboration
LLM Response: Georgia Institute of Technology yuyu.zhang@cc.gatech.edu... Zornitsa Kozareva Amazon Web Services kozareva@amazon.com... Alexander J. Smola Amazon Web Services smola@amazon.com... Le Song Georgia Institute of Technology lsong@cc.gatech.edu

Reproducibility Variable: Pseudocode
Result: Yes
LLM Response: Algorithm 1: Joint training of VRN

Reproducibility Variable: Open Source Code
Result: No
LLM Response: Our new benchmark dataset collections METAQA are publicly available at https://goo.gl/f3AmcY. (This refers to the dataset, not to code for the method described in the paper.)

Reproducibility Variable: Open Datasets
Result: Yes
LLM Response: Thus in this paper, we introduce a new challenging question-answer benchmark: METAQA (MoviE Text Audio QA). It contains more than 400K questions for both single- and multi-hop reasoning, and provides more realistic text and audio versions. METAQA serves as a comprehensive extension of WikiMovies. Due to the page limit, we briefly list the datasets included in METAQA below, and put more details in Appendix A4. Our new benchmark dataset collections METAQA are publicly available at https://goo.gl/f3AmcY.

Reproducibility Variable: Dataset Splits
Result: Yes
LLM Response: We follow the same split of train/validation/test for all datasets. The number of questions in each part is listed in Appendix (Table 3).

Reproducibility Variable: Hardware Specification
Result: No
LLM Response: No specific hardware details (such as GPU/CPU models, memory amounts, or other machine specifications) used for running the experiments are provided in the main text.

Reproducibility Variable: Software Dependencies
Result: No
LLM Response: We implement baseline methods with TensorFlow. (No version is specified, and no other software dependencies with versions are mentioned.)

Reproducibility Variable: Experiment Setup
Result: No
LLM Response: We tune hyperparameters on validation set for all methods. (No specific hyperparameters, their values, or detailed training configurations are provided.)
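The paper's statement that hyperparameters were tuned on the validation set, without listing the values, is the gap this row flags. As a generic illustration only (not the authors' code; the search grid, `validation_score`, and parameter names are all hypothetical), validation-set hyperparameter tuning can be sketched as:

```python
# Hypothetical sketch of tuning hyperparameters on a validation split,
# the practice the paper alludes to. The grid, parameter names, and
# scoring function are illustrative stand-ins, not from the paper.
from itertools import product

def validation_score(lr, hidden_dim):
    # Stand-in for "train a model with this config, score it on the
    # validation split"; a toy function with a known optimum.
    return -(lr - 0.01) ** 2 - (hidden_dim - 128) ** 2 / 1e4

def grid_search(grid):
    # Exhaustively try every configuration and keep the best scorer.
    best_cfg, best_score = None, float("-inf")
    for lr, hidden_dim in product(grid["lr"], grid["hidden_dim"]):
        score = validation_score(lr, hidden_dim)
        if score > best_score:
            best_cfg = {"lr": lr, "hidden_dim": hidden_dim}
            best_score = score
    return best_cfg

best = grid_search({"lr": [0.1, 0.01, 0.001], "hidden_dim": [64, 128, 256]})
print(best)  # {'lr': 0.01, 'hidden_dim': 128}
```

A reproducible report would record the grid searched and the selected configuration; here both are invented for the sketch.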