Dynamically Pruned Message Passing Networks for Large-scale Knowledge Graph Reasoning

Authors: Xiaoran Xu, Wei Feng, Yunsheng Jiang, Xiaohui Xie, Zhiqing Sun, Zhi-Hong Deng

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 4 EXPERIMENTS. Datasets. We use six large KG datasets: FB15K, FB15K-237, WN18, WN18RR, NELL995, and YAGO3-10. Experimental settings. We use the same data split protocol as in many papers... Comparison results and analysis. We report comparison on FB15K-237 and WN18RR in Table 2. Our model DPMPN significantly outperforms all the baselines in HITS@1, HITS@3, and MRR. (A minimal sketch of these ranking metrics follows the table.)
Researcher Affiliation | Collaboration | Xiaoran Xu1, Wei Feng1, Yunsheng Jiang1, Xiaohui Xie1, Zhiqing Sun2, Zhi-Hong Deng3; 1Hulu, {xiaoran.xu, wei.feng, yunsheng.jiang, xiaohui.xie}@hulu.com; 2Carnegie Mellon University, zhiqings@andrew.cmu.edu; 3Peking University, zhdeng@pku.edu.cn
Pseudocode | No | The paper describes the model architecture and operations mathematically and textually but does not include any clearly labeled pseudocode or algorithm block. (A hedged sketch of the attention-guided pruning idea follows the table.)
Open Source Code | Yes | Our code is written in Python based on TensorFlow 2.0 and NumPy 1.16 and can be found at the link below. https://github.com/anonymousauthor123/DPMPN
Open Datasets | Yes | Datasets. We use six large KG datasets: FB15K, FB15K-237, WN18, WN18RR, NELL995, and YAGO3-10. FB15K-237 (Toutanova & Chen, 2015) is sampled from FB15K (Bordes et al., 2013)... WN18RR (Dettmers et al., 2018) is a subset of WN18 (Bordes et al., 2013)... NELL995 (Xiong et al., 2017)... YAGO3-10 (Mahdisoltani et al., 2014).
Dataset Splits | Yes | We use the same data split protocol as in many papers (Dettmers et al., 2018; Xiong et al., 2017; Das et al., 2018). For each dataset, we create a KG, a directed graph consisting of all train triples with their inverses added... Table 1: Statistics of the six KG datasets (columns: #Train, #Valid, #Test). (A minimal sketch of this inverse-augmented graph construction follows the table.)
Hardware Specification | Yes | We run our experiments using a 12 GB GPU, TITAN X (Pascal), with an Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz.
Software Dependencies | Yes | Our code is written in Python based on TensorFlow 2.0 and NumPy 1.16 and can be found at the link below.
Experiment Setup | Yes | See hyperparameter details in the appendix. Appendix 8, HYPERPARAMETER SETTINGS. Table 3: the standard hyperparameter settings used for each dataset, plus their one-epoch training time. For experimental analysis, only one hyperparameter is adjusted at a time while the remaining ones stay fixed at the standard setting. (A minimal sketch of this one-at-a-time protocol follows the table.) For NELL995, the one-epoch training time is the average time cost over the 12 single-query-relation tasks.
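
The HITS@1, HITS@3, and MRR figures quoted in the Research Type row are standard ranking metrics. A minimal NumPy sketch, assuming the (1-based) rank of the correct entity has already been computed for each test triple; the `mrr_and_hits` helper is illustrative and not taken from the authors' code:

```python
import numpy as np

def mrr_and_hits(ranks, ks=(1, 3, 10)):
    """Compute MRR and HITS@k from 1-based ranks of the correct entities."""
    ranks = np.asarray(ranks, dtype=np.float64)
    metrics = {"MRR": float(np.mean(1.0 / ranks))}  # mean reciprocal rank
    for k in ks:
        metrics[f"HITS@{k}"] = float(np.mean(ranks <= k))  # fraction ranked in top k
    return metrics

# Example: three test triples whose correct entities ranked 1st, 4th, and 2nd.
print(mrr_and_hits([1, 4, 2]))
# {'MRR': 0.583..., 'HITS@1': 0.333..., 'HITS@3': 0.666..., 'HITS@10': 1.0}
```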
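
Because the paper contains no pseudocode (Pseudocode row), the following is only a hedged reading of the idea named in the title: message passing restricted to a subgraph that is expanded and then pruned by attention at each step. Every identifier here (`pruned_message_passing`, `attention`, `top_k`) is hypothetical and not drawn from the authors' implementation:

```python
def pruned_message_passing(adj, attention, start, num_steps=3, top_k=8):
    """Sketch of attention-guided subgraph expansion with pruning.

    adj: dict mapping node -> list of (relation, neighbor) edges.
    attention: callable node -> score, standing in for learned attention.
    Starting from the query entity, expand along outgoing edges, then
    keep only the top_k highest-scoring nodes before the next step.
    """
    subgraph = {start}
    for _ in range(num_steps):
        frontier = {t for h in subgraph for _, t in adj.get(h, [])}
        candidates = subgraph | frontier
        # Prune: retain only the k nodes with the highest attention scores.
        subgraph = set(sorted(candidates, key=attention, reverse=True)[:top_k])
    return subgraph
```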
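
The split protocol in the Dataset Splits row builds a directed graph from all train triples plus their inverses. A minimal sketch of that construction, assuming triples arrive as (head, relation, tail) string tuples; the `_inv` suffix for inverse relations is an assumption, not the paper's encoding:

```python
from collections import defaultdict

def build_kg(train_triples):
    """Directed KG with an inverse edge (t, r_inv, h) for every (h, r, t)."""
    adj = defaultdict(list)  # node -> list of (relation, neighbor)
    for h, r, t in train_triples:
        adj[h].append((r, t))
        adj[t].append((r + "_inv", h))  # assumed inverse-relation naming
    return adj

kg = build_kg([("barack_obama", "born_in", "honolulu")])
# kg["honolulu"] == [("born_in_inv", "barack_obama")]
```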
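
Finally, the analysis protocol quoted in the Experiment Setup row, adjusting one hyperparameter while keeping the rest at the standard setting, amounts to a one-at-a-time sweep. The hyperparameter names and values below are placeholders, not the paper's actual Table 3 settings:

```python
STANDARD = {"learning_rate": 1e-3, "top_k": 8, "num_steps": 3}  # placeholder values

def one_at_a_time(standard, grid):
    """Yield configs that differ from the standard setting in exactly one hyperparameter."""
    for name, values in grid.items():
        for value in values:
            if value == standard[name]:
                continue  # skip the standard value itself
            cfg = dict(standard)
            cfg[name] = value
            yield cfg

for cfg in one_at_a_time(STANDARD, {"top_k": [4, 8, 16]}):
    print(cfg)  # varies top_k only, e.g. {'learning_rate': 0.001, 'top_k': 4, 'num_steps': 3}
```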