Inductive Relation Prediction by Subgraph Reasoning
Authors: Komal Teru, Etienne Denis, Will Hamilton
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments on three benchmark knowledge completion datasets: WN18RR (Dettmers et al., 2018), FB15k-237 (Toutanova et al., 2015), and NELL-995 (Xiong et al., 2017) (and other variants derived from them). Our empirical study is motivated by the following questions: 1. Inductive relation prediction. (...) 2. Transductive relation prediction. (...) 3. Ablation study. |
| Researcher Affiliation | Academia | Komal K. Teru, Etienne G. Denis, William L. Hamilton (McGill University; Mila). Correspondence to: Komal K. Teru <komal.teru@mail.mcgill.ca>. |
| Pseudocode | No | The paper describes the model details and training regime using descriptive text and mathematical equations, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and the data for all the following experiments is available at: https://github.com/kkteru/grail. |
| Open Datasets | Yes | We perform experiments on three benchmark knowledge completion datasets: WN18RR (Dettmers et al., 2018), FB15k-237 (Toutanova et al., 2015), and NELL-995 (Xiong et al., 2017) (and other variants derived from them). |
| Dataset Splits | Yes | For NELL-995, we split the whole dataset into train/valid/test set by the ratio 70/15/15, making sure all the entities and relations in the valid and test splits occur at least once in the train set. |
| Hardware Specification | No | The paper does not specify the hardware used for experiments, such as specific GPU or CPU models, memory, or cloud computing instances with detailed specifications. |
| Software Dependencies | No | The paper does not provide specific software dependencies or their version numbers, such as programming language versions or library versions (e.g., Python 3.x, PyTorch 1.x, CUDA 11.x). |
| Experiment Setup | Yes | We employ a 3-layer GNN with the dimension of all latent embeddings equal to 32. The basis dimension is set to 4 and the edge dropout rate to 0.5. |
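The hyperparameters quoted in the last row can be made concrete with a minimal NumPy sketch. This is an illustrative assumption, not the authors' implementation: it assumes an R-GCN-style relational layer with basis decomposition (a natural reading of "basis dimension"), and all function names (`make_layer`, `forward`) are hypothetical.

```python
import numpy as np

# Hyperparameters quoted in the Experiment Setup row
NUM_LAYERS = 3      # 3-layer GNN
EMB_DIM = 32        # dimension of all latent embeddings
NUM_BASES = 4       # basis dimension
EDGE_DROPOUT = 0.5  # edge dropout rate

def make_layer(num_rels, in_dim, out_dim, num_bases, rng):
    """Parameters for one relational layer with basis decomposition:
    each relation's weight matrix is a learned mixture of shared bases."""
    return {
        "bases": rng.normal(scale=0.1, size=(num_bases, in_dim, out_dim)),
        "coeffs": rng.normal(scale=0.1, size=(num_rels, num_bases)),
    }

def forward(layer, h, edges, edge_types, train=False, rng=None):
    # Per-relation weights: W[r] = sum_b coeffs[r, b] * bases[b]
    W = np.einsum("rb,bio->rio", layer["coeffs"], layer["bases"])
    src, dst = edges[:, 0], edges[:, 1]
    # One message per edge, transformed by that edge's relation weight
    msgs = np.einsum("ei,eio->eo", h[src], W[edge_types])
    if train:
        # Edge dropout: randomly zero out whole messages during training
        keep = rng.random(len(msgs)) >= EDGE_DROPOUT
        msgs = msgs * keep[:, None]
    out = np.zeros((h.shape[0], W.shape[-1]))
    np.add.at(out, dst, msgs)    # scatter-add messages to destination nodes
    return np.maximum(out, 0.0)  # ReLU

# Toy run: 10 nodes, 5 relation types, 3 stacked layers of width 32
rng = np.random.default_rng(0)
layers = [make_layer(5, EMB_DIM, EMB_DIM, NUM_BASES, rng)
          for _ in range(NUM_LAYERS)]
h = rng.normal(size=(10, EMB_DIM))
edges = np.array([[0, 1], [2, 3], [4, 5], [1, 0]])
edge_types = np.array([0, 1, 2, 3])
for lyr in layers:
    h = forward(lyr, h, edges, edge_types)
```

The basis decomposition keeps the parameter count at `NUM_BASES` shared matrices plus per-relation mixing coefficients, rather than one full weight matrix per relation.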