Inductive Relation Prediction by BERT

Authors: Hanwen Zha, Zhiyu Chen, Xifeng Yan

AAAI 2022, pp. 5923-5931

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | BERTRL outperforms the SOTAs in 15 out of 18 cases in both inductive and transductive settings. Meanwhile, it demonstrates strong generalization capability in few-shot learning and is explainable. ... Empirical experiments on inductive knowledge graph completion benchmarks demonstrate the superior performance of BERTRL in comparison with state-of-the-art baselines: It achieves an absolute increase of 6.3% and 5.3% in Hits@1 and MRR on average. (See the metric sketch after this table.)
Researcher Affiliation | Academia | Hanwen Zha, Zhiyu Chen, Xifeng Yan, University of California, Santa Barbara, {hwzha, zhiyuchen, xyan}@cs.ucsb.edu
Pseudocode | No | The paper includes a pipeline diagram (Figure 1) but no formal pseudocode blocks or algorithms.
Open Source Code | Yes | The data and code can be found at https://github.com/zhw12/BERTRL.
Open Datasets | Yes | We evaluate our method on three benchmark datasets: WN18RR (Dettmers et al. 2018), FB15k-237 (Toutanova et al. 2015), and NELL-995 (Xiong, Hoang, and Wang 2017), using their inductive and transductive subsets introduced by GraIL (Teru, Denis, and Hamilton 2020), https://github.com/kkteru/grail. (See the data-loading sketch after this table.)
Dataset Splits | No | The paper states that 'The best learning rate and training epoch are selected based on validation set.' and provides table statistics for the 'train' and 'ind-test' subsets. However, it does not give explicit numerical proportions (e.g., an 80/10/10 split) or absolute counts for the validation set.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments, such as GPU models, CPU types, or memory.
Software Dependencies | No | Both BERTRL and KG-BERT were implemented in PyTorch using the Huggingface Transformers library (Wolf et al. 2020). However, specific version numbers for PyTorch or Huggingface Transformers are not provided.
Experiment Setup | Yes | Learning rate 5e-5 is set for BERTRL and 2e-5 for KG-BERT, and the training epochs are 2 and 5, respectively. We sample 10 negative triples in negative sampling and 3 reasoning paths in path sampling; increasing these sizes further does not improve performance. (See the configuration sketch after this table.)
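
As a concrete illustration of the Open Datasets row, the sketch below loads one of the GraIL inductive splits. It assumes the https://github.com/kkteru/grail repository has been cloned locally and that the split directories (e.g., WN18RR_v1 and its inductive counterpart WN18RR_v1_ind) contain tab-separated head/relation/tail triples; the clone path and file names used here are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of loading a GraIL benchmark split.
# Assumptions: the GraIL repository has been cloned to ./grail and its data
# directories contain tab-separated <head, relation, tail> triples per line.
from pathlib import Path
from typing import List, Tuple

Triple = Tuple[str, str, str]

def load_triples(path: Path) -> List[Triple]:
    """Read tab-separated (head, relation, tail) triples from a text file."""
    triples = []
    with path.open() as f:
        for line in f:
            if not line.strip():
                continue  # skip blank lines
            head, relation, tail = line.strip().split("\t")
            triples.append((head, relation, tail))
    return triples

if __name__ == "__main__":
    data_root = Path("grail/data")  # hypothetical clone location
    train = load_triples(data_root / "WN18RR_v1" / "train.txt")
    ind_test = load_triples(data_root / "WN18RR_v1_ind" / "test.txt")
    print(f"{len(train)} training triples, {len(ind_test)} inductive test triples")
```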
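The Experiment Setup row can be read as the following hedged configuration sketch. It restates the reported hyperparameters using the Huggingface Transformers TrainingArguments API; the output directories and the sampling-settings dictionary are illustrative names, and the released BERTRL code may expose these options differently.

```python
# Hedged restatement of the reported hyperparameters, not the authors' exact
# training script: learning rates and epochs come from the paper, everything
# else (output dirs, dict keys) is illustrative.
from transformers import TrainingArguments

bertrl_args = TrainingArguments(
    output_dir="bertrl_out",   # hypothetical output directory
    learning_rate=5e-5,        # reported learning rate for BERTRL
    num_train_epochs=2,        # reported training epochs for BERTRL
)

kgbert_args = TrainingArguments(
    output_dir="kgbert_out",   # hypothetical output directory
    learning_rate=2e-5,        # reported learning rate for the KG-BERT baseline
    num_train_epochs=5,        # reported training epochs for KG-BERT
)

sampling = {
    "num_negative_triples": 10,  # negative triples per positive in negative sampling
    "num_reasoning_paths": 3,    # reasoning paths kept in path sampling
}
```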
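Since the headline results are reported in Hits@1 and MRR, a minimal sketch of both metrics follows, computed from the rank of the correct entity among the scored candidates for each test query (rank 1 is best); the toy ranks in the usage line are hypothetical.

```python
# Standard rank-based KG completion metrics: Hits@k and mean reciprocal rank.
from typing import Sequence

def hits_at_k(ranks: Sequence[int], k: int = 1) -> float:
    """Fraction of queries whose correct answer is ranked within the top k."""
    return sum(r <= k for r in ranks) / len(ranks)

def mean_reciprocal_rank(ranks: Sequence[int]) -> float:
    """Average of 1/rank over all queries."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# Toy usage with hypothetical ranks:
ranks = [1, 3, 1, 2, 10]
print(f"Hits@1 = {hits_at_k(ranks, 1):.3f}, MRR = {mean_reciprocal_rank(ranks):.3f}")
```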