GaussianPath: A Bayesian Multi-Hop Reasoning Framework for Knowledge Graph Reasoning
Authors: Guojia Wan, Bo Du (pp. 4393-4401)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted extensive experiments on multiple KGs. Experimental results show a superior performance than other baselines, especially significant improvements on the automated extracted KG. |
| Researcher Affiliation | Academia | 1National Engineering Research Center for Multimedia Software, Wuhan University, China. 2Institute of Artificial Intelligence, School of Computer Science, Wuhan University, China. 3Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, China. |
| Pseudocode | Yes | Algorithm 1 Gaussian Path |
| Open Source Code | Yes | Our code is available at https://github.com/BromothymolBlue/Gaupa. |
| Open Datasets | Yes | Datasets We performed experiments on five benchmark datasets: FB15K-237 (Dettmers et al. 2018), WN18RR (Dettmers et al. 2018), NELL995 (Xiong, Hoang, and Wang 2017), UMLS (Das et al. 2018) and Kinship (Das et al. 2018). |
| Dataset Splits | No | The paper states it applies 'standard knowledge graph completion tasks following previous work (Das et al. 2018)' but does not provide specific percentages or counts for train/validation/test splits or reference specific predefined splits for reproducibility. |
| Hardware Specification | No | The numerical calculations in this paper have been done on the supercomputing system in the Supercomputing Center of Wuhan University. This is too vague and lacks specific hardware details such as CPU/GPU models or memory. |
| Software Dependencies | No | The paper refers to a 'public implementation' for Bayesian LSTM but does not provide specific version numbers for software dependencies like Python, PyTorch, or TensorFlow. |
| Experiment Setup | Yes | Hyper-parameters Settings We set the dimension d of mean µ and Σ to 100. The layer number of Bayesian LSTM is 1. The size of Hidden layer is 200. Learning rate η is 0.01. cmin/cmax is 0.01/0.4. Other hyper-parameters settings are available in supplementary materials. |
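The hyper-parameters quoted above can be collected into a single configuration object, which is convenient for anyone attempting a reproduction. The sketch below is illustrative only: the key names are hypothetical and do not come from the authors' released code at https://github.com/BromothymolBlue/Gaupa.

```python
# Hypothetical reproduction config assembled from the paper's stated
# hyper-parameters; key names are illustrative, not the authors' own.
GAUSSIANPATH_HPARAMS = {
    "embedding_dim": 100,    # dimension d of mean mu and covariance Sigma
    "bayesian_lstm_layers": 1,
    "hidden_size": 200,
    "learning_rate": 0.01,   # eta
    "c_min": 0.01,           # cmin/cmax schedule bounds
    "c_max": 0.4,
}

def describe(hparams: dict) -> str:
    """Render the config as a one-line summary for experiment logs."""
    return ", ".join(f"{k}={v}" for k, v in hparams.items())

print(describe(GAUSSIANPATH_HPARAMS))
```

Remaining settings (e.g. batch size, rollout length) are only in the paper's supplementary materials, so a full reproduction would still need those.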