Anchoring Path for Inductive Relation Prediction in Knowledge Graphs

Authors: Zhixiang Su, Di Wang, Chunyan Miao, Lizhen Cui

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate APST on three public datasets and achieve state-of-the-art (SOTA) performance in 30 of 36 transductive, inductive, and few-shot experimental settings. We conduct extensive experiments using three datasets to comprehensively evaluate the performance of APST. Additionally, we perform an ablation study to assess the impact and effectiveness of APs and detailed descriptions."
Researcher Affiliation | Academia | "1 School of Computer Science and Engineering, Nanyang Technological University (NTU), Singapore; 2 Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY), NTU, Singapore; 3 WeBank-NTU Joint Research Institute on Fintech, NTU, Singapore; 4 SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University (SDU), China; 5 School of Software, SDU, China"
Pseudocode | No | The paper describes the proposed method in text and with diagrams (e.g., Figure 2), but it does not contain any formal pseudocode or algorithm blocks.
Open Source Code | Yes | "We implement APST based on the SOTA Sentence Transformer (all-mpnet-base-v2) using PyTorch and train it on two NVIDIA Tesla V100 GPUs with 32GB RAM." The code is released at github.com/ZhixiangSu/APST.
Open Datasets | Yes | "We conduct experiments using the commonly benchmarked transductive and inductive datasets introduced by (Teru, Denis, and Hamilton 2020), which are the subsets of WN18RR, FB15k-237, and NELL-995."
Dataset Splits | No | The paper mentions a training graph Gtrain and a testing graph Gtest and refers to reduced subsets for few-shot experiments (e.g., "1000 (or 2000) training triplets"), but it does not explicitly describe a distinct validation split or partitioning percentages (e.g., an 80/10/10 split). A hedged few-shot sampling sketch is given below the table.
Hardware Specification | Yes | "We implement APST based on the SOTA Sentence Transformer (all-mpnet-base-v2) using PyTorch and train it on two NVIDIA Tesla V100 GPUs with 32GB RAM."
Software Dependencies | No | The implementation relies on PyTorch and the all-mpnet-base-v2 Sentence Transformer, but no specific library versions (e.g., for PyTorch) are reported. An illustrative encoding sketch is given below the table.
Experiment Setup | No | The paper describes the loss function (a cosine embedding loss with margin M), the input sentence formulations, and the AP filtering mechanism with its thresholds (AP accuracy and AP recall). However, it does not report numerical values for common training hyperparameters such as the learning rate, batch size, number of epochs, the margin M, or the filtering thresholds. A hedged loss sketch is given below the table.
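
The few-shot settings reduce the training data to 1000 or 2000 triplets. The sketch below shows one way such a subset could be drawn, assuming GraIL-style tab-separated triplet files (head, relation, tail per line); the file path and the random-sampling procedure are illustrative assumptions, not the authors' documented protocol.

```python
# Minimal sketch: drawing a few-shot training subset of 1000 or 2000 triplets.
# Assumes GraIL-style tab-separated triplet files ("head<TAB>relation<TAB>tail");
# the file path and the sampling procedure are assumptions, not taken from the paper.
import random

def load_triplets(path):
    """Read one (head, relation, tail) triplet per line from a TSV file."""
    with open(path, encoding="utf-8") as f:
        return [tuple(line.strip().split("\t")) for line in f if line.strip()]

def few_shot_subset(triplets, k, seed=0):
    """Sample k triplets without replacement, reproducibly."""
    rng = random.Random(seed)
    return rng.sample(triplets, min(k, len(triplets)))

train_triplets = load_triplets("data/WN18RR_v1/train.txt")  # hypothetical path
subset_1k = few_shot_subset(train_triplets, k=1000)
subset_2k = few_shot_subset(train_triplets, k=2000)
```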
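Since the implementation builds on the all-mpnet-base-v2 Sentence Transformer, the following sketch shows how a query sentence and candidate path sentences could be embedded and compared with that backbone via the sentence-transformers library; the example sentences are hypothetical placeholders and do not reproduce the paper's exact input formulation.

```python
# Minimal sketch: embedding a query sentence and candidate path sentences with
# the all-mpnet-base-v2 backbone named in the paper. The sentences below are
# hypothetical placeholders, not the authors' exact input formulation.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

query_sentence = "What relation holds between entity A and entity B?"  # hypothetical
path_sentences = [
    "A is located in C, and C is the capital of B.",  # hypothetical path description
    "A was founded by D, and D was born in B.",
]

query_emb = model.encode(query_sentence, convert_to_tensor=True)
path_embs = model.encode(path_sentences, convert_to_tensor=True)

# Cosine similarity between the query and each candidate path sentence.
similarities = util.cos_sim(query_emb, path_embs)
print(similarities)
```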
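The paper states that training uses a cosine embedding loss with margin M but does not report the value of M. Below is a minimal PyTorch sketch of such a loss; the margin value, batch size, and embedding dimensionality are placeholder assumptions.

```python
# Minimal sketch: cosine embedding loss with margin M, as described in the paper.
# The margin value (0.5), batch size, and 768-dimensional embeddings are
# placeholder assumptions; the paper does not report these numbers.
import torch
import torch.nn as nn

margin_M = 0.5                                   # assumed value, not from the paper
loss_fn = nn.CosineEmbeddingLoss(margin=margin_M)

query_emb = torch.randn(8, 768)                  # e.g., query sentence embeddings
path_emb = torch.randn(8, 768)                   # e.g., candidate path embeddings
labels = torch.tensor([1, 1, -1, -1, 1, -1, 1, -1])  # +1 = positive pair, -1 = negative pair

loss = loss_fn(query_emb, path_emb, labels)
print(loss.item())
```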