TEILP: Time Prediction over Knowledge Graphs via Logical Reasoning

Authors: Siheng Xiong, Yuan Yang, Ali Payani, James C Kerce, Faramarz Fekri

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compare TEILP with state-of-the-art methods on five benchmark datasets. We show that our model achieves a significant improvement over baselines while providing interpretable explanations. In particular, we consider several scenarios where training samples are limited, event types are imbalanced, and forecasting the time of future events based on only past events is desired. In all these cases, TEILP outperforms state-of-the-art methods in terms of robustness. ... Experiments. Datasets. We evaluate the proposed method TEILP on five benchmark temporal knowledge graph datasets: WIKIDATA12k, YAGO11k (Dasgupta, Ray, and Talukdar 2018), ICEWS14, ICEWS05-15 (García-Durán, Dumančić, and Niepert 2018), and GDELT100 (Leetaru and Schrodt 2013). ... Evaluation Metrics. For interval-based datasets, we adopt a new evaluation metric, aeIOU, proposed by Jain et al. (2020). ... The results of the experiments are shown in Table 1, where TEILP outperforms all baselines with respect to all metrics.
Researcher Affiliation | Collaboration | 1 Georgia Institute of Technology, 225 North Avenue, Atlanta, Georgia 30332 USA; 2 Cisco Systems Inc, 300 East Tasman Dr, San Jose, California 95134 USA. {sxiong45, yyang754}@gatech.edu, apayani@cisco.com, clayton.kerce@gtri.gatech.edu, faramarz.fekri@ece.gatech.edu
Pseudocode | Yes | Algorithm 1: Rule Learning ... Algorithm 2: Rule Application
Open Source Code | Yes | Code and data available at https://github.com/xiongsiheng/TEILP.
Open Datasets | Yes | We evaluate the proposed method TEILP on five benchmark temporal knowledge graph datasets: WIKIDATA12k, YAGO11k (Dasgupta, Ray, and Talukdar 2018), ICEWS14, ICEWS05-15 (García-Durán, Dumančić, and Niepert 2018), and GDELT100 (Leetaru and Schrodt 2013). ... In the supplementary material, we provide a detailed introduction and dataset statistics.
Dataset Splits | Yes | To ensure a fair comparison, we use the split provided by Jain et al. (2020) for the WIKIDATA12k, YAGO11k, ICEWS14, and ICEWS05-15 datasets and by Goel et al. (2020) for the GDELT dataset. ... In the low-data scenario, given the same validation and test set, we change the size of the training set over a broad range.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | No | The paper mentions some settings, such as 'The maximum rule length of our method is set to 5 for YAGO11k, and 3 for the others', 'During training, functions f(·) will be fitted, and weights a and w will be learned', and 'Training of the model is to minimize the log-likelihood loss', but it does not provide detailed hyperparameters (e.g., learning rate, batch size, optimizer details) or a dedicated 'Experimental Setup' section.
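For context on the aeIOU metric quoted above: Jain et al. (2020) define it for interval time prediction as the size of the intersection of the gold and predicted intervals (with an empty intersection credited one time unit) divided by the size of the smallest contiguous interval containing both. A minimal sketch, assuming inclusive integer-valued intervals at a fixed granularity (the function name `aeiou` is illustrative, not from the paper):

```python
def aeiou(gold, pred):
    """Affinity-enhanced IOU (aeIOU) for discrete time intervals,
    following the definition in Jain et al. (2020).

    gold, pred: (start, end) pairs of integer time units, endpoints inclusive.
    Returns a score in (0, 1]; higher means the prediction is closer.
    """
    g_start, g_end = gold
    p_start, p_end = pred
    # Size of the intersection in time units (0 if the intervals are disjoint).
    overlap = max(0, min(g_end, p_end) - max(g_start, p_start) + 1)
    # Hull: the smallest contiguous interval containing both intervals.
    hull = max(g_end, p_end) - min(g_start, p_start) + 1
    # aeIOU credits an empty intersection with a single time unit, so the
    # score still decreases smoothly as the prediction drifts further away.
    return max(overlap, 1) / hull

# An exact prediction scores 1.0; a disjoint one scores 1 / hull size.
print(aeiou((2000, 2005), (2000, 2005)))  # 1.0
print(aeiou((2000, 2000), (2005, 2005)))  # 1/6 ≈ 0.1667
```

This smooth fall-off for non-overlapping intervals is what distinguishes aeIOU from plain IOU, which would score every disjoint prediction 0 regardless of how far off it is.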