Temporal Knowledge Graph Reasoning with Historical Contrastive Learning

Authors: Yi Xu, Junjie Ou, Hui Xu, Luoyi Fu

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our proposed model on five benchmark graphs. The results demonstrate that CENET significantly outperforms all existing methods in most metrics, achieving a relative improvement in Hits@1 of at least 8.3% over previous state-of-the-art baselines on event-based datasets.
Researcher Affiliation | Academia | Department of Computer Science and Engineering, Shanghai Jiao Tong University
Pseudocode | Yes | Algorithm 1: Learning algorithm of CENET
Open Source Code | Yes | All our datasets and code are publicly available: https://github.com/xyjigsaw/CENET
Open Datasets | Yes | All our datasets and code are publicly available: https://github.com/xyjigsaw/CENET
Dataset Splits | Yes | All datasets except ICEWS14 are split into a training set (80%), a validation set (10%), and a test set (10%).
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper does not list ancillary software with version numbers (e.g., Python 3.8, PyTorch 1.9) needed to replicate the experiments.
Experiment Setup | Yes | For the model configuration, we set the batch size to 1024, the embedding dimension to 200, and the learning rate to 0.001, and use the Adam optimizer. Training for the loss L is limited to 30 epochs, and the second stage of contrastive learning to 20 epochs. The hyperparameter α is set to 0.2, and λ is set to 2.
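The reported setup can be collected into a single configuration sketch. This is a minimal illustration assuming Python: the dictionary keys and the `split_dataset` helper are invented names for this sketch, not taken from the authors' released code.

```python
# Hyperparameter values as reported in the paper; key names are illustrative.
HYPERPARAMS = {
    "batch_size": 1024,
    "embedding_dim": 200,
    "learning_rate": 0.001,  # used with the Adam optimizer
    "epochs_stage1": 30,     # first-stage training on the loss L
    "epochs_stage2": 20,     # second-stage contrastive learning
    "alpha": 0.2,            # hyperparameter alpha
    "lambda": 2.0,           # hyperparameter lambda
}

def split_dataset(quadruples, ratios=(0.8, 0.1, 0.1)):
    """Order-preserving 80/10/10 train/validation/test split, matching the
    proportions reported for all datasets except ICEWS14."""
    n = len(quadruples)
    n_train = int(n * ratios[0])
    n_valid = int(n * ratios[1])
    return (quadruples[:n_train],
            quadruples[n_train:n_train + n_valid],
            quadruples[n_train + n_valid:])
```

For example, splitting a list of 1000 quadruples with the default ratios yields subsets of 800, 100, and 100 facts.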