KBQA: An Online Template Based Question Answering System over Freebase
Authors: Wanyun Cui, Yanghua Xiao, Wei Wang
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show the number of predicates and templates KBQA learned in Table 1, with comparison to bootstrapping [4], which uses BOA patterns to represent questions. From the result, KBQA finds significantly more templates and predicates even though bootstrapping uses a larger corpus. This implies that KBQA is more effective: (1) the large number of templates ensures that KBQA understands diverse questions; (2) the large number of predicates ensures that KBQA understands diverse question intents. (An illustrative sketch of the template idea follows the table.) |
| Researcher Affiliation | Academia | Wanyun Cui, Yanghua Xiao, Wei Wang; Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University; wanyuncui1@gmail.com, shawyh@fudan.edu.cn, weiwang1@fudan.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is open-source or publicly available. |
| Open Datasets | Yes | KBQA learns each template's corresponding predicate through Yahoo! Answers, a large-scale QA corpus consisting of millions of QA pairs. ... Corpus: 41M QA pairs |
| Dataset Splits | No | The paper mentions using "qa pairs as the training data" from Yahoo! Answers but does not specify any training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper mentions tools like "Stanford NER [3]" and "Probase [6]" but does not specify version numbers for these or any other software dependencies required to replicate the experiment. |
| Experiment Setup | No | The paper describes the system's architecture and general learning approach (e.g., using a maximum likelihood estimator) but does not provide specific experimental setup details such as hyperparameters, training configurations, or system-level settings. (A hedged sketch of such an estimator follows the table.) |
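The Research Type evidence above hinges on KBQA learning question templates, i.e., questions with their entity mention abstracted to a concept. As a minimal illustration of that idea (not the authors' implementation), the Python sketch below replaces a recognized entity mention with a concept placeholder; the `TOY_CONCEPTS` dictionary and `extract_template` helper are hypothetical stand-ins for a real conceptualization resource such as Probase plus an entity recognizer such as Stanford NER.

```python
# Illustrative sketch only: forming a question template by replacing an
# entity mention with a concept placeholder. A toy dictionary stands in for
# entity recognition plus a concept taxonomy.

TOY_CONCEPTS = {
    "barack obama": "$person",
    "france": "$country",
}

def extract_template(question: str) -> str:
    """Replace the first known entity mention with its concept placeholder."""
    lowered = question.lower()
    for mention, concept in TOY_CONCEPTS.items():
        idx = lowered.find(mention)
        if idx != -1:
            return question[:idx] + concept + question[idx + len(mention):]
    return question  # no known entity found; leave the question unchanged

if __name__ == "__main__":
    print(extract_template("When was Barack Obama born?"))  # When was $person born?
```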
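The Experiment Setup row notes only that learning uses a maximum likelihood estimator, without configuration details. A count-based MLE over (template, predicate) observations is one plausible reading of that statement; the sketch below is an assumption, with `mle_predicate_distribution` and the toy observations invented for the example rather than taken from the paper.

```python
from collections import Counter, defaultdict

def mle_predicate_distribution(observations):
    """Count-based maximum likelihood estimate of P(predicate | template).

    `observations` is an iterable of (template, predicate) pairs, e.g. obtained
    by matching QA-pair answers against a knowledge base.
    """
    counts = defaultdict(Counter)
    for template, predicate in observations:
        counts[template][predicate] += 1
    return {
        template: {pred: n / sum(preds.values()) for pred, n in preds.items()}
        for template, preds in counts.items()
    }

if __name__ == "__main__":
    toy = [
        ("When was $person born?", "date_of_birth"),
        ("When was $person born?", "date_of_birth"),
        ("When was $person born?", "place_of_birth"),
    ]
    print(mle_predicate_distribution(toy))
    # {'When was $person born?': {'date_of_birth': 0.666..., 'place_of_birth': 0.333...}}
```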