Knowledge Base Question Answering with Topic Units
Authors: Yunshi Lan, Shuohang Wang, Jing Jiang
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on three commonly used benchmark datasets show that our method consistently works well and outperforms the previous state of the art on two datasets. |
| Researcher Affiliation | Academia | Yunshi Lan, Shuohang Wang and Jing Jiang, School of Information Systems, Singapore Management University {yslan.2015, shwang.2014}@phdis.smu.edu.sg, jingjiang@smu.edu.sg |
| Pseudocode | Yes | Although these are two separate steps, we jointly learn the parameters in an end-to-end manner. (See Algorithm 1.) |
| Open Source Code | No | The paper does not provide a statement about releasing its source code or a link to a code repository. |
| Open Datasets | Yes | We evaluate our KBQA method on three benchmark datasets. Web Questions SP (WQSP): This is a dataset that has been widely used for KBQA [Yih et al., 2016]. Complex Web Questions (CWQ): This dataset was introduced by Talmor and Berant [2018]... Simple Questions (SQ): This is another popularly used KBQA dataset, introduced by Bordes et al. [2015]. For WQSP and CWQ, the knowledge base used is the entire Freebase. For SQ, the knowledge base used is a subset of Freebase that comes with the SQ dataset, which is called FB2M. |
| Dataset Splits | Yes | Web Questions SP (WQSP): It contains 2848 training questions, 250 development questions and 1639 test questions. Complex Web Questions (CWQ): This dataset was introduced by Talmor and Berant [2018] with the intention to create more complex questions from the Web Questions SP dataset. CWQ contains 27K, 3K and 3K questions for training, development and test, respectively. Simple Questions (SQ): This is another popularly used KBQA dataset... SQ contains 76K, 11K and 21K questions for training, development and test, respectively. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU or CPU models used for the experiments. |
| Software Dependencies | No | The paper mentions software like S-MART, GloVe, and Adam optimizer, but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | We use Adam optimizer with an initial learning rate of 0.001. All hidden vectors are 200-dimensional. All hyper-parameters are tuned on the development data. ... We set K to be 3. |
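The splits and training settings quoted above can be collected into a small, reusable configuration. The following Python sketch is our own illustration (the names `DATASET_SPLITS`, `HYPERPARAMS`, and `total_questions` are not from the paper, and the CWQ/SQ counts expand the paper's approximate "K" figures):

```python
# Dataset split sizes as reported in the paper; CWQ and SQ counts are the
# paper's rounded "K" figures expanded to plain integers (approximate).
DATASET_SPLITS = {
    "WQSP": {"train": 2848, "dev": 250, "test": 1639},
    "CWQ": {"train": 27_000, "dev": 3_000, "test": 3_000},  # approximate
    "SQ": {"train": 76_000, "dev": 11_000, "test": 21_000},  # approximate
}

# Training settings quoted in the Experiment Setup row; the key names here
# are illustrative, not taken from any released configuration file.
HYPERPARAMS = {
    "optimizer": "Adam",
    "learning_rate": 0.001,
    "hidden_dim": 200,   # "All hidden vectors are 200-dimensional."
    "K": 3,              # "We set K to be 3."
}


def total_questions(split_counts):
    """Sum the train/dev/test question counts for one dataset."""
    return sum(split_counts.values())
```

For example, `total_questions(DATASET_SPLITS["WQSP"])` confirms the WQSP dataset totals 4737 questions across its three splits.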