Learning Rule-Induced Subgraph Representations for Inductive Relation Prediction
Authors: Tianyu Liu, Qitan Lv, Jie Wang, Shuling Yang, Hanzhu Chen
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on inductive relation prediction benchmarks demonstrate the effectiveness of our REST. We conduct experiments on three inductive benchmark datasets proposed by GraIL[11], which are derived from WN18RR[32], FB15K-237[33], and NELL995[34]. |
| Researcher Affiliation | Academia | Tianyu Liu¹, Qitan Lv¹, Jie Wang¹,², Shuling Yang¹, Hanzhu Chen¹; ¹University of Science and Technology of China; ²Institute of Artificial Intelligence, Hefei Comprehensive National Science Center; {tianyu_liu, qitanlv, slyang0916, chenhz}@mail.ustc.edu.cn, {jiewangx}@ustc.edu.cn |
| Pseudocode | No | The paper describes the proposed methods using mathematical formulations but does not provide any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/smart-lty/REST |
| Open Datasets | Yes | We conduct experiments on three inductive benchmark datasets proposed by GraIL[11], which are derived from WN18RR[32], FB15K-237[33], and NELL995[34]. For inductive relation prediction, the training set and testing set should have no overlapping entities. Details of the datasets are summarized in Appendix B. |
| Dataset Splits | No | The paper mentions 'training set and testing set' but does not explicitly describe a validation set split or its size/methodology for reproducibility. |
| Hardware Specification | Yes | In general, our proposed method is implemented in DGL[36] and PyTorch[35] and trained on a single NVIDIA GeForce RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions using 'PyTorch[35] and DGL[36]' but does not provide specific version numbers for these software libraries, which is required for reproducibility. |
| Experiment Setup | Yes | We apply the Adam optimizer[39] with an initial learning rate of 0.0005. Observing that batch size has little effect on the performance of the model, we set the batch size as large as possible for each dataset to accelerate training. We use the binary cross entropy loss. The maximum number of training epochs is set to 10. During training, we add reversed edges to fully capture relevant rules. The number of hops h is set to 3, consistent with existing subgraph-based methods. We conduct a grid search to obtain optimal hyperparameters, searching subgraph types in {enclosing, unclosing}, embedding dimensions in {16, 32}, number of GNN layers in {3, 4, 5, 6}, and dropout in {0, 0.1, 0.2}. A minimal sketch of this training setup is given after the table. |
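
The sketch below is a minimal, hedged reconstruction of the training setup quoted in the Experiment Setup row: the optimizer, learning rate, loss, epoch budget, and hyperparameter grid follow the values reported above, while `build_model`, `train_loader`, and `val_score` are hypothetical placeholders (the authors' actual implementation lives in the linked repository and uses DGL subgraph batches).

```python
# Minimal sketch of the reported training setup; model/data handling is hypothetical.
import itertools
import torch
import torch.nn as nn

# Hyperparameter grid reported in the paper.
GRID = {
    "subgraph_type": ["enclosing", "unclosing"],
    "emb_dim": [16, 32],
    "num_gnn_layers": [3, 4, 5, 6],
    "dropout": [0.0, 0.1, 0.2],
}

def train_one_config(model: nn.Module, train_loader, num_epochs: int = 10):
    """Train one configuration with Adam (lr=5e-4) and binary cross entropy."""
    optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
    criterion = nn.BCEWithLogitsLoss()  # assumption: BCE applied to raw scores
    for _ in range(num_epochs):         # maximum of 10 training epochs
        model.train()
        for subgraphs, labels in train_loader:  # hypothetical (subgraph batch, 0/1 label) pairs
            optimizer.zero_grad()
            scores = model(subgraphs)
            loss = criterion(scores.view(-1), labels.float())
            loss.backward()
            optimizer.step()

def grid_search(build_model, train_loader, val_score):
    """Exhaustive search over the reported hyperparameter ranges."""
    best_cfg, best_metric = None, float("-inf")
    keys = list(GRID)
    for values in itertools.product(*(GRID[k] for k in keys)):
        cfg = dict(zip(keys, values))
        model = build_model(**cfg)      # hypothetical model factory
        train_one_config(model, train_loader)
        metric = val_score(model)       # hypothetical validation metric callback
        if metric > best_metric:
            best_cfg, best_metric = cfg, metric
    return best_cfg, best_metric
```

`BCEWithLogitsLoss` is an assumption: the paper only states that binary cross entropy is used, so the official code may instead apply a sigmoid followed by plain `BCELoss`.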