MINES: Message Intercommunication for Inductive Relation Reasoning over Neighbor-Enhanced Subgraphs
Authors: Ke Liang, Lingyuan Meng, Sihang Zhou, Wenxuan Tu, Siwei Wang, Yue Liu, Meng Liu, Long Zhao, Xiangjun Dong, Xinwang Liu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments prove the promising capacity of the proposed MINES from various aspects, especially for the superiority, effectiveness, and transfer ability. We implement MINES based on the prototype GraIL model (Teru, Denis, and Hamilton 2020), and experiments are conducted based on a single NVIDIA TITAN XP. |
| Researcher Affiliation | Academia | 1School of Computer, National University of Defense Technology, Changsha, China 2School of Intelligence Science and Technology, National University of Defense Technology, Changsha, China 3Intelligent Game and Decision Lab, Beijing, China 4Qilu University of Technology, Jinan, China |
| Pseudocode | No | The paper describes its method using diagrams and mathematical formulations, but it does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement or a link providing concrete access to the source code for the methodology described. |
| Open Datasets | Yes | Most KG datasets are originally created for transductive settings. To evaluate inductive ability, the paper uses 12 datasets derived from FB15K-237, NELL-995, and WN18RR, each split into v1, v2, v3, and v4 subsets (Teru, Denis, and Hamilton 2020). |
| Dataset Splits | No | The paper mentions using well-known datasets but does not explicitly provide specific details about training, validation, and test splits (e.g., percentages or sample counts), nor does it explicitly mention a validation set proportion. |
| Hardware Specification | Yes | We implement MINES based on the prototype GraIL model (Teru, Denis, and Hamilton 2020), and experiments are conducted based on a single NVIDIA TITAN XP. |
| Software Dependencies | No | The paper mentions architectural components like RGCN and GCN layers but does not specify any software libraries (e.g., PyTorch, TensorFlow) or their version numbers used in the implementation. |
| Experiment Setup | Yes | We select the 3-layer model (i.e., UD-MP+BD-MP+UD-MP) and 3-hop extracted subgraphs, the same as the prototype GraIL, to compare fairly with SOTA models. The dimension of the feature representation and the dropout rate are set to 32 and 0.5, respectively. The batch size and the margin parameter γ are set to 16 and 10, respectively. |
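
The hyperparameters reported in the Experiment Setup row can be collected into a minimal configuration sketch. This is illustrative only: the names (`CONFIG`, `margin_loss`) are hypothetical and not taken from the MINES codebase, and the loss shown is the standard margin ranking loss used by GraIL-style models, assumed here to match the stated margin parameter γ = 10.

```python
# Hypothetical configuration sketch assembled from the values reported
# in the paper's experiment setup; names are illustrative, not official.
CONFIG = {
    "num_layers": 3,       # 3-layer model: UD-MP + BD-MP + UD-MP
    "subgraph_hops": 3,    # 3-hop extracted subgraphs, as in GraIL
    "embed_dim": 32,       # dimension of the feature representation
    "dropout": 0.5,
    "batch_size": 16,
    "margin": 10.0,        # margin parameter gamma
}

def margin_loss(pos_score: float, neg_score: float,
                gamma: float = CONFIG["margin"]) -> float:
    """Margin ranking loss common to GraIL-style link predictors:
    penalize a negative triple scoring within gamma of a positive one."""
    return max(0.0, neg_score - pos_score + gamma)
```

With γ = 10, a positive score of 5 against a negative score of 0 still incurs a loss of 5, since the margin is not yet satisfied; a positive score of 20 incurs none.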