Multi-Aspect Explainable Inductive Relation Prediction by Sentence Transformer

Authors: Zhixiang Su, Di Wang, Chunyan Miao, Lizhen Cui

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on three real-world datasets. The experimental results show that compared to SOTA models, KRST achieves the best performance in most transductive and inductive test cases (4 of 6), and in 11 of 12 few-shot test cases.
Researcher Affiliation | Collaboration | (1) School of Computer Science and Engineering, Nanyang Technological University (NTU), Singapore; (2) SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University (SDU), China; (3) Joint NTU-WeBank Research Centre on Fintech, NTU, Singapore; (4) Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY), NTU, Singapore; (5) School of Software, SDU, China
Pseudocode | Yes | Algorithm 1: Path Extraction (a hedged sketch of one plausible implementation appears below the table)
Open Source Code | Yes | github.com/ZhixiangSu/KRST
Open Datasets | Yes | To evaluate the transductive and inductive performance of KRST, we use all three datasets adopted in (Zha, Chen, and Yan 2022), which were introduced by (Teru, Denis, and Hamilton 2020) (github.com/kkteru/grail). These datasets are commonly adopted by various inductive approaches, and they are the respective subsets of WN18RR, FB15k-237, and NELL-995.
Dataset Splits | No | The paper states 'We only randomly generate negative triplets and use them for training and validation' and describes how paths are extracted for training and validation, but it does not give explicit percentages or counts for the training, validation, and test splits. (A hedged sketch of one standard negative-sampling scheme appears below the table.)
Hardware Specification | Yes | We implement KRST with a SOTA sentence transformer (all-mpnet-base-v2) on a Tesla V100 GPU with 16GB RAM. (A minimal encoding sketch using this checkpoint appears below the table.)
Software Dependencies | No | The paper mentions using a 'SOTA sentence transformer (all-mpnet-base-v2)' but does not specify version numbers for the sentence-transformers library itself, Python, PyTorch, or other key software dependencies.
Experiment Setup | No | The paper names the maximum search depth L and the maximum number of paths per triplet M as parameters but does not report their values, nor other common setup details such as learning rates, batch sizes, or optimizer settings. (A hypothetical configuration sketch with clearly marked placeholder values appears below the table.)
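
The paper's Algorithm 1 (Path Extraction) is given only as pseudocode. Below is a minimal Python sketch of one plausible reading: a depth-bounded DFS that collects at most M relation paths of length at most L between the head and tail of a triplet. The function name, graph representation, and cycle-avoidance policy are assumptions, not the authors' implementation.

```python
from collections import defaultdict

def extract_paths(triplets, head, tail, max_depth, max_paths):
    """Hypothetical sketch of Algorithm 1 (Path Extraction): collect up to
    max_paths relation paths of length <= max_depth from head to tail via
    depth-first search over the training graph. Not the authors' code."""
    # Build an adjacency map: entity -> list of (relation, neighbor) edges.
    adj = defaultdict(list)
    for h, r, t in triplets:
        adj[h].append((r, t))

    paths = []

    def dfs(node, rel_path, visited):
        if len(paths) >= max_paths:
            return
        if node == tail and rel_path:
            paths.append(list(rel_path))  # record a copy of this relation path
            return
        if len(rel_path) >= max_depth:
            return
        for rel, nxt in adj[node]:
            if nxt not in visited:        # skip already-visited entities to avoid cycles
                visited.add(nxt)
                rel_path.append(rel)
                dfs(nxt, rel_path, visited)
                rel_path.pop()
                visited.remove(nxt)

    dfs(head, [], {head})
    return paths
```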
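
On dataset splits, the quoted sentence only says that negative triplets are randomly generated for training and validation; the corruption scheme is unspecified. The sketch below shows one standard scheme (uniform head-or-tail corruption, filtered against known positives) and is an assumption, not the paper's procedure.

```python
import random

def generate_negatives(triplets, entities, num_per_positive=1, seed=0):
    """Hedged sketch of random negative sampling: corrupt the head or tail
    of each positive triplet with a uniformly sampled entity. The corruption
    scheme and filtering policy are assumptions, not taken from the paper."""
    rng = random.Random(seed)
    positives = set(triplets)
    negatives = []
    for h, r, t in triplets:
        for _ in range(num_per_positive):
            # Resample until the corrupted triplet is not a known positive.
            while True:
                e = rng.choice(entities)
                cand = (e, r, t) if rng.random() < 0.5 else (h, r, e)
                if cand not in positives:
                    negatives.append(cand)
                    break
    return negatives
```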
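
The hardware and dependency rows name the all-mpnet-base-v2 checkpoint but no library versions. The snippet below shows the standard sentence-transformers API for loading that checkpoint and encoding a verbalized relation path; the example sentence and this exact usage are illustrative, not taken from the KRST codebase.

```python
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# all-mpnet-base-v2 is the checkpoint named in the paper.
model = SentenceTransformer("all-mpnet-base-v2")

# KRST verbalizes relation paths into sentences before encoding; the
# template below is a made-up placeholder, not the paper's verbalization.
path_sentence = "head entity -> relation one -> intermediate entity -> relation two -> tail entity"
embedding = model.encode(path_sentence)

print(embedding.shape)  # (768,) -- all-mpnet-base-v2 outputs 768-dim vectors
```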
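
Because the experiment-setup row notes that L, M, and the optimizer settings are unreported, any reimplementation must pick its own values. A hypothetical configuration sketch follows; every number in it is a placeholder chosen for illustration, not a value from the paper.

```python
from dataclasses import dataclass

@dataclass
class KRSTConfig:
    """Hypothetical experiment configuration. The paper names L and M as
    parameters but reports no values; every default below is a placeholder."""
    max_search_depth: int = 3        # L: unreported in the paper; placeholder
    max_paths_per_triplet: int = 6   # M: unreported; placeholder
    learning_rate: float = 1e-5      # unreported; placeholder
    batch_size: int = 16             # unreported; placeholder

cfg = KRSTConfig()  # a reproduction would need to tune these from scratch
```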