Explainable Subgraph Reasoning for Forecasting on Temporal Knowledge Graphs
Authors: Zhen Han, Peng Chen, Yunpu Ma, Volker Tresp
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our model on four benchmark temporal knowledge graphs for the link forecasting task. |
| Researcher Affiliation | Collaboration | Zhen Han (1,2), Peng Chen (2,3), Yunpu Ma (1), Volker Tresp (1,2); (1) Institute of Informatics, LMU Munich; (2) Corporate Technology, Siemens AG; (3) Department of Informatics, Technical University of Munich |
| Pseudocode | Yes | as shown in Algorithm 1 in the appendix. |
| Open Source Code | Yes | Code and datasets are available at https://github.com/TemporalKGTeam/xERTE |
| Open Datasets | Yes | Integrated Crisis Early Warning System (ICEWS) (Boschee et al., 2015) and YAGO (Mahdisoltani et al., 2013) have established themselves in the research community as benchmark datasets of temporal KGs. |
| Dataset Splits | Yes | We split quadruples of a temporal KG into train, validation, and test sets by timestamps, ensuring (timestamps of training set)<(timestamps of validation set)<(timestamps of test set). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper states "We implement our model and all baselines in PyTorch (Paszke et al., 2019)," but it does not specify explicit version numbers for PyTorch or any other software dependencies beyond citing the PyTorch paper from 2019. |
| Experiment Setup | Yes | We tune hyperparameters of our model using a grid search. We set the learning rate to be 0.0002, the batch size to be 128, the inference step L to be 3. |
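The "Dataset Splits" row above quotes a purely textual description of the temporal split. The sketch below makes it concrete for quadruples of the form (subject, relation, object, timestamp); the cut-off values, example facts, and function name are illustrative assumptions, not taken from the paper or the released code.

```python
# Minimal sketch of a timestamp-based split for temporal KG quadruples,
# ensuring train timestamps < validation timestamps < test timestamps.
# Cut-off values and the example facts are illustrative only.

def split_by_time(quadruples, valid_cut, test_cut):
    """Split (s, r, o, t) quadruples by timestamp t."""
    train = [q for q in quadruples if q[3] < valid_cut]
    valid = [q for q in quadruples if valid_cut <= q[3] < test_cut]
    test = [q for q in quadruples if q[3] >= test_cut]
    return train, valid, test

quads = [
    ("actor_A", "consult", "actor_B", 10),
    ("actor_A", "make_statement", "actor_C", 20),
    ("actor_B", "criticize", "actor_C", 30),
]
train, valid, test = split_by_time(quads, valid_cut=15, test_cut=25)
print(len(train), len(valid), len(test))  # 1 1 1
```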
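The "Experiment Setup" row reports the tuned hyperparameters (learning rate 0.0002, batch size 128, inference steps L = 3). The skeleton below shows how those values might be wired into a PyTorch training loop; the `train` helper and the `model(batch, steps=...)` call signature are assumptions for illustration, not the interface of the released xERTE code. Only the three hyperparameter values come from the paper.

```python
# Illustrative PyTorch training skeleton using the reported hyperparameters.
import torch
from torch.utils.data import DataLoader

LEARNING_RATE = 2e-4   # reported in the paper
BATCH_SIZE = 128       # reported in the paper
INFERENCE_STEPS = 3    # inference (expansion) steps L, reported in the paper

def train(model, dataset, epochs=1):
    loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
    for _ in range(epochs):
        for batch in loader:
            optimizer.zero_grad()
            # Assumed interface: the model consumes a batch of queries and
            # returns a scalar training loss after L inference steps.
            loss = model(batch, steps=INFERENCE_STEPS)
            loss.backward()
            optimizer.step()
```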