Temporal Knowledge Graph Extrapolation via Causal Subhistory Identification

Authors: Kai Chen, Ye Wang, Xin Song, Siwei Chen, Han Yu, Aiping Li

IJCAI 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments demonstrate the remarkable potential of our CSI in the following aspects: superiority, improvement, explainability, and robustness.
Researcher Affiliation Academia 1National University of Defense Technology, Changsha, China; 2Defense Innovation Institute, Beijing, China. {chenkai, ye.wang, songxin, yuhan17, liaiping}@nudt.edu.cn, chensiwei1257@163.com
Pseudocode No The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code No The paper does not include any statement about releasing source code or provide a link to a code repository.
Open Datasets Yes Benchmark Datasets: Four TKGR benchmark datasets are leveraged to evaluate our CSI, including ICEWS14* [Han et al., 2021a], ICEWS05-15 [García-Durán et al., 2018], WIKI [Leblay and Chekol, 2018], and GDELT [Jin et al., 2020]. Details of the four datasets we use are shown in Table 2.
Dataset Splits Yes Table 2: Statistics of the datasets (number of facts per split):

Split        GDELT      ICEWS14*   WIKI     ICEWS05-15
Train        1,734,399  63,685     539,286  368,868
Validation   238,765    13,823     67,538   46,302
Test         305,241    13,222     63,110   46,159
Hardware Specification No The paper does not provide specific details about the hardware used for experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies No The paper does not specify version numbers for any software dependencies, libraries, or frameworks used in the experiments.
Experiment Setup No The paper mentions that "λ1 and λ2 are hyper-parameters that control the strength of disentanglement and causal intervention" but does not provide their specific numerical values or other key hyperparameters like learning rate, batch size, or optimizer settings used in the experimental setup.
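The role of λ1 and λ2 described in the paper can be sketched as a weighted multi-term objective. The sketch below is an assumption about the usual form of such an objective, not CSI's actual formulation: the individual loss terms and the default weights are hypothetical placeholders, since the paper reports neither.

```python
# Hedged sketch: how weighting hyper-parameters such as lambda1 and lambda2
# typically combine auxiliary regularizers with a main task loss.
# The loss terms and default values are illustrative, not from the paper.

def total_loss(task_loss: float,
               disentangle_loss: float,
               intervention_loss: float,
               lambda1: float = 0.1,
               lambda2: float = 0.1) -> float:
    """L = L_task + lambda1 * L_disentangle + lambda2 * L_intervention."""
    return task_loss + lambda1 * disentangle_loss + lambda2 * intervention_loss

# With lambda1 = lambda2 = 0, the objective reduces to the task loss alone.
print(total_loss(1.0, 0.5, 0.25, lambda1=0.0, lambda2=0.0))  # 1.0
```

Larger λ values would push the optimizer toward stronger disentanglement and causal intervention at the expense of the primary task loss, which is why the unreported values matter for reproducibility.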