KnowFormer: Revisiting Transformers for Knowledge Graph Reasoning

Authors: Junnan Liu, Qianren Mao, Weifeng Jiang, Jianxin Li

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate the superior performance of KNOWFORMER compared to prominent baseline methods on both transductive and inductive benchmarks.
Researcher Affiliation | Collaboration | 1 Zhongguancun Laboratory, Beijing, P.R. China. 2 SCSE, Beihang University, Beijing, P.R. China. 3 SCSE, Nanyang Technological University, Singapore.
Pseudocode | Yes | Algorithm 1: Attention Computation
Open Source Code | Yes | Our code is available at https://github.com/jnanliu/KnowFormer.
Open Datasets | Yes | We conduct experiments on four widely utilized transductive knowledge graph reasoning datasets: FB15k-237 (Toutanova & Chen, 2015), WN18RR (Dettmers et al., 2018), NELL-995 (Xiong et al., 2017), and YAGO3-10 (Mahdisoltani et al., 2015).
Dataset Splits | Yes | For each dataset, we performed hyperparameter tuning on the validation set.
Hardware Specification | Yes | CPU: Intel(R) Xeon(R) Platinum 8358 @ 2.60 GHz with 1 TB DDR4 memory and Intel Xeon Gold 6148 @ 2.40 GHz with 384 GB DDR4 memory. GPU: NVIDIA Tesla A100 SXM4 with 40 GB of memory and NVIDIA Tesla V100 SXM2 with 32 GB of memory.
Software Dependencies | Yes | Software: CUDA 12.1, Python 3.9.14, PyTorch (Paszke et al., 2019) 2.1.0.
Experiment Setup | Yes | We considered different values for the learning rate (lr) from the set {1e-4, 5e-4, 1e-3, 5e-3}, weight decay (wd) from the set {0, 1e-6, 1e-5, 1e-4}, hidden dimension (d) from the set {16, 32, 64}, number of negative samples from the set {2^6, 2^8, 2^10, 2^12, 2^14, 2^16}, number of layers for the query function from the set {1, 2, 3}, number of layers for the value function from the set {1, 2, 3}, and number of layers for KNOWFORMER (L) from the set {1, 2, 3}.
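
The reported search space can be written out as a plain grid. The sketch below is illustrative only, assuming a simple exhaustive grid search: the dictionary keys (e.g. num_layers_query) are hypothetical names, not identifiers from the released KnowFormer code, and each enumerated configuration would be scored on the validation split as described above.

```python
# Minimal sketch of the hyperparameter grid reported in the Experiment Setup row.
# Key names are hypothetical; values are taken from the reported search space.
from itertools import product

search_space = {
    "lr": [1e-4, 5e-4, 1e-3, 5e-3],
    "weight_decay": [0, 1e-6, 1e-5, 1e-4],
    "hidden_dim": [16, 32, 64],
    "num_negatives": [2**6, 2**8, 2**10, 2**12, 2**14, 2**16],
    "num_layers_query": [1, 2, 3],  # layers of the query function
    "num_layers_value": [1, 2, 3],  # layers of the value function
    "num_layers": [1, 2, 3],        # layers of KnowFormer (L)
}

# Enumerate every candidate configuration; in practice each one would be
# trained and evaluated on the validation set, keeping the best performer.
configs = [dict(zip(search_space, values)) for values in product(*search_space.values())]
print(len(configs))  # total number of candidate configurations in the grid
```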