Two-Phase Hypergraph Based Reasoning with Dynamic Relations for Multi-Hop KBQA

Authors: Jiale Han, Bo Cheng, Xu Wang

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on two widely used multi-hop KBQA datasets to prove the effectiveness of our model.
Researcher Affiliation | Academia | State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications {hanjl,chengbo,wxx}@bupt.edu.cn
Pseudocode | No | The paper describes the model in text and mathematical equations, but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | PQL [Zhou et al., 2018] is a single-answer KBQA dataset... MetaQA [Zhang et al., 2018] is a large-scale multi-answer KBQA dataset...
Dataset Splits | No | The paper mentions 'dev datasets' for PQL, but does not provide numerical train/validation/test splits (percentages or sample counts), nor does it cite predefined splits in enough detail to reproduce them.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions models like BERT and TransE, and optimizers like Adam, but does not provide specific software dependency names with version numbers (e.g., Python 3.x, PyTorch 1.x) needed for replication.
Experiment Setup | Yes | Throughout our experiments, we apply 768-dimensional BERT embedding [...] and 400-dimensional TransE embedding [...] The hidden sizes of the LSTM and directed hypergraph convolutional networks are all set to 400. During training, the Adam optimizer [...] is employed to minimize the loss with a learning rate of 1e-4. λ and dropout are set to 1 and 0.4.
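
The reported hyperparameters are concrete enough to wire into a skeleton training script. Below is a minimal sketch, assuming PyTorch and precomputed BERT token embeddings; since the authors released no code, the QuestionEncoder class and all variable names are illustrative stand-ins, and only the numeric settings (dimensions, learning rate, dropout, λ) come from the paper.

```python
import torch
import torch.nn as nn

# Hyperparameters as reported in the paper's experiment setup.
BERT_DIM = 768       # BERT embedding dimension
TRANSE_DIM = 400     # TransE entity/relation embedding dimension
HIDDEN_DIM = 400     # hidden size of the LSTM and hypergraph conv layers
LEARNING_RATE = 1e-4 # Adam learning rate
DROPOUT = 0.4
LAMBDA = 1.0         # loss-weighting coefficient λ

class QuestionEncoder(nn.Module):
    """Stand-in question encoder: a BiLSTM over precomputed BERT embeddings.

    The paper's full model (directed hypergraph convolution, dynamic
    relations) is not reproduced here; this only wires up the reported
    dimensions and optimizer settings.
    """
    def __init__(self):
        super().__init__()
        # Bidirectional halves concatenate back to HIDDEN_DIM.
        self.lstm = nn.LSTM(BERT_DIM, HIDDEN_DIM // 2,
                            batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(DROPOUT)

    def forward(self, bert_embeddings):
        # bert_embeddings: (batch, seq_len, BERT_DIM)
        out, _ = self.lstm(bert_embeddings)
        return self.dropout(out)  # (batch, seq_len, HIDDEN_DIM)

model = QuestionEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

# Smoke test with a random batch of 2 questions, 16 tokens each.
tokens = torch.randn(2, 16, BERT_DIM)
print(model(tokens).shape)  # torch.Size([2, 16, 400])
```

Even this partial reconstruction shows why the missing pieces matter for reproducibility: without the hypergraph layers, the loss definition that λ weights, or the dataset splits, the reported settings alone are not enough to rerun the experiments.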