Logic and Commonsense-Guided Temporal Knowledge Graph Completion

Authors: Guanglin Niu, Bo Li

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | “The experimental results of TKGC task illustrate the significant performance improvements of our model compared with the existing approaches.” “The experimental results on three benchmark datasets of TKGs illustrate the significant performance improvements of our model compared with several state-of-the-art baselines.”
Researcher Affiliation | Academia | “Guanglin Niu¹, Bo Li¹,²*; ¹Institute of Artificial Intelligence, Beihang University, Beijing, China; ²Hangzhou Innovation Institute, Beihang University, Hangzhou, China; {beihangngl, boli}@buaa.edu.cn”
Pseudocode | No | The paper states “The entire algorithm of our proposed temporal rule learning module is presented in Appendix.” However, the appendix content is not included in the provided text, and no pseudocode or algorithm block appears in the main body.
Open Source Code | Yes | “The appendix, source code and datasets of this paper are available at https://github.com/ngl567/LCGE.”
Open Datasets | Yes | “Three commonly used datasets of TKGs are used in the experiments, namely ICEWS14 (García-Durán, Dumančić, and Niepert 2018), ICEWS05-15 (García-Durán, Dumančić, and Niepert 2018) and Wikidata12k (Dasgupta, Ray, and Talukdar 2018).” (A hedged loading sketch follows the table.)
Dataset Splits | Yes | “For each dataset, all the events are split into training, validation and test sets in a proportion of 80%/10%/10% following some previous works (Lacroix, Obozinski, and Usunier 2020; Xu et al. 2020a). We tune all the other hyper-parameters by grid search on the validation sets.” (A split sketch follows the table.)
Hardware Specification | Yes | “We conduct all the experiments in Pytorch and on a GeForce GTX 2080Ti GPU.”
Software Dependencies | No | The paper mentions “Pytorch” and “AMIE+” as software used but does not provide specific version numbers for these dependencies (e.g., “Pytorch 1.9” or “AMIE+ 3.0”). (A version-logging snippet follows the table.)
Experiment Setup | Yes | “The batch size is set as 1024. The thresholds of SC and HC in our temporal rule learning algorithm are both fixed to 0.1 on all the datasets. We tune all the other hyper-parameters by grid search on the validation sets. Besides, our model is trained with the Adam optimizer (Kingma and Ba 2015) to learn the embeddings of entities, predicates, concepts and timestamps.” (A setup sketch follows the table.)
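
For readers rebuilding the data pipeline, here is a minimal loading sketch, assuming the common distribution format of these benchmarks in which each line of a split file is a tab-separated (head, relation, tail, timestamp) quadruple. The directory layout and the `load_quadruples` helper are hypothetical, not taken from the LCGE repository.

```python
from pathlib import Path

def load_quadruples(path):
    """Read one TKG split file; each line is assumed to be a
    tab-separated (head, relation, tail, timestamp) quadruple."""
    quadruples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if len(fields) >= 4:
                head, relation, tail, timestamp = fields[:4]
                quadruples.append((head, relation, tail, timestamp))
    return quadruples

# Hypothetical layout: one directory per dataset, standard split files.
data_dir = Path("data/ICEWS14")
train = load_quadruples(data_dir / "train.txt")
print(len(train), train[0])
```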
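
The released benchmarks already ship with fixed splits, so the sketch below only illustrates the quoted 80%/10%/10% proportion for a dataset without predefined splits; the `split_events` helper and the fixed seed are assumptions, not the exact procedure of the cited works.

```python
import random

def split_events(events, seed=42):
    """Randomly partition a list of (h, r, t, ts) events into
    80% train / 10% validation / 10% test."""
    rng = random.Random(seed)  # fixed seed is an assumption
    shuffled = events[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(0.8 * n)
    n_valid = int(0.1 * n)
    train = shuffled[:n_train]
    valid = shuffled[n_train:n_train + n_valid]
    test = shuffled[n_train + n_valid:]
    return train, valid, test
```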
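
Since no versions are reported, reproducers can at least log their own environment when rerunning the released code; this snippet uses only standard PyTorch attributes and is not drawn from the paper.

```python
import sys
import torch

# Record the environment so reruns can be compared; the paper itself
# reports no version numbers for PyTorch or AMIE+.
print("python:", sys.version.split()[0])
print("torch :", torch.__version__)
print("cuda  :", torch.version.cuda)
print("gpu   :", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")
```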
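
To make the quoted configuration concrete, the following is a hedged PyTorch sketch of the stated settings together with the rule filter implied by the SC (standard confidence) and HC (head coverage) thresholds, using the AMIE-style definitions of those scores. The embedding model, the training data, and the learning rate are placeholders, since the paper's grid-searched values are not reported.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

BATCH_SIZE = 1024    # from the paper
SC_THRESHOLD = 0.1   # standard confidence threshold (paper)
HC_THRESHOLD = 0.1   # head coverage threshold (paper)

def keep_rule(support, body_support, head_facts):
    """Keep a mined temporal rule only if both quality scores clear
    the thresholds. SC = support / body groundings; HC = support /
    facts matching the rule head (AMIE-style definitions)."""
    sc = support / body_support
    hc = support / head_facts
    return sc >= SC_THRESHOLD and hc >= HC_THRESHOLD

# Hypothetical embedding model and training data.
model = torch.nn.Embedding(10000, 200)                 # placeholder for LCGE
events = TensorDataset(torch.randint(0, 10000, (50000, 4)))
loader = DataLoader(events, batch_size=BATCH_SIZE, shuffle=True)

# Adam optimizer as stated; the learning rate is an assumed placeholder,
# since the paper tunes hyper-parameters by grid search on validation.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```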