Local and Global: Temporal Question Answering via Information Fusion

Authors: Yonghao Liu, Di Liang, Mengyu Li, Fausto Giunchiglia, Ximing Li, Sirui Wang, Wei Wu, Lan Huang, Xiaoyue Feng, Renchu Guan

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on two benchmarks, and LGQA significantly outperforms previous state-of-the-art models, especially on difficult questions.
Researcher Affiliation | Collaboration | (1) The Key Laboratory for Symbol Computation and Knowledge Engineering of the Ministry of Education, College of Computer Science and Technology, Jilin University; (2) Center for Natural Language Processing, Meituan Inc.; (3) University of Trento
Pseudocode | No | The paper describes its model architecture and components with formulas and diagrams, but it does not contain any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not state that source code is released and provides no link to a code repository.
Open Datasets | Yes | We employ two temporal KGQA benchmarks, i.e., CRONQUESTIONS [Saxena et al., 2021] and TimeQuestions [Jia et al., 2021].
Dataset Splits | Yes | Split statistics (Train / Dev / Test) per category: Simple Entity 90,651 / 7,745 / 7,812; Simple Time 61,471 / 5,197 / 5,046; Before/After 23,869 / 1,982 / 2,151; First/Last 118,556 / 11,198 / 11,159; Time Join 55,453 / 3,878 / 3,832; Simple Reasoning 152,122 / 12,942 / 12,858; Complex Reasoning 197,878 / 17,058 / 17,142; Entity Answer 225,672 / 19,362 / 19,524; Time Answer 124,328 / 10,638 / 10,476; Total 350,000 / 30,000 / 30,000. (These counts are cross-checked in the sketch after this table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or computing environment) used to run the experiments.
Software Dependencies | No | The paper mentions various models and methods (e.g., BERT, the Adam optimizer) but does not specify any software dependency versions (e.g., Python, PyTorch, or TensorFlow versions, or versions of other libraries).
Experiment Setup | Yes | We set the weighted coefficient in the KG encoder stage as λ = 0.5. In the second stage, we extract a 3-hop sub-graph of the question, i.e., the number of hops is set to 3. We perform 2-layer GNNs to obtain the updated node embeddings, i.e., L = 2. Furthermore, we use 3-layer Transformers with 4 heads per layer in the knowledge fusion layer Φ(·). We train our model for 20 epochs with the Adam optimizer, and the final parameters are selected by validation performance.
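
The three groupings in the Dataset Splits row (fine-grained question type, reasoning type, and answer type) each cover the same set of questions, so every column should sum to the reported totals of 350,000 / 30,000 / 30,000. A minimal Python check of the quoted counts; the variable names are ours, not from the paper:

```python
# Cross-check the split counts quoted in the "Dataset Splits" row.
# Each grouping covers the same question set, so every column should
# sum to the reported totals (350,000 / 30,000 / 30,000).

question_types = {
    "Simple Entity":     (90_651, 7_745, 7_812),
    "Simple Time":       (61_471, 5_197, 5_046),
    "Before/After":      (23_869, 1_982, 2_151),
    "First/Last":        (118_556, 11_198, 11_159),
    "Time Join":         (55_453, 3_878, 3_832),
}
reasoning_types = {
    "Simple Reasoning":  (152_122, 12_942, 12_858),
    "Complex Reasoning": (197_878, 17_058, 17_142),
}
answer_types = {
    "Entity Answer":     (225_672, 19_362, 19_524),
    "Time Answer":       (124_328, 10_638, 10_476),
}
totals = (350_000, 30_000, 30_000)

for name, grouping in [("question type", question_types),
                       ("reasoning type", reasoning_types),
                       ("answer type", answer_types)]:
    sums = tuple(sum(col) for col in zip(*grouping.values()))
    assert sums == totals, (name, sums)
    print(f"{name:>14}: train/dev/test = {sums}")
```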
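
The Experiment Setup row lists all hyperparameters reported in the paper. Below is a minimal sketch of how they might be gathered for a reimplementation; the class and field names are hypothetical (the authors release no code), and values not quoted above (learning rate, batch size, hidden sizes, dropout, seeds) are deliberately left out.

```python
from dataclasses import dataclass

# Hyperparameters quoted in the "Experiment Setup" row, collected for a
# hypothetical reimplementation. Field names are ours; only the values
# are grounded in the paper.

@dataclass
class LGQAConfig:
    kg_encoder_lambda: float = 0.5   # weighted coefficient in the KG encoder stage
    subgraph_hops: int = 3           # hop radius of the question sub-graph
    gnn_layers: int = 2              # L = 2 GNN layers for node embeddings
    fusion_layers: int = 3           # Transformer layers in the knowledge fusion module
    fusion_heads: int = 4            # attention heads per fusion layer
    epochs: int = 20                 # trained with the Adam optimizer
    # Not reported: learning rate, batch size, hidden sizes, dropout, seeds.

if __name__ == "__main__":
    print(LGQAConfig())
```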