Know-Evolve: Deep Temporal Reasoning for Dynamic Knowledge Graphs

Authors: Rakshit Trivedi, Hanjun Dai, Yichen Wang, Le Song

ICML 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate significantly improved performance over various relational learning approaches on two large-scale real-world datasets. Further, our method effectively predicts occurrence or recurrence time of a fact, which is novel compared to prior reasoning approaches in the multi-relational setting. The large-scale experiments on two real-world datasets show that our framework has consistently and significantly better performance for link prediction than state-of-arts that do not account for temporal and evolving non-linear dynamics."
Researcher Affiliation | Academia | "College of Computing, Georgia Institute of Technology. Correspondence to: Rakshit Trivedi <rstrivedi@gatech.edu>, Le Song <lsong@cc.gatech.edu>."
Pseudocode | Yes | "To address this challenge, we design an efficient Global BPTT algorithm (Algorithm 2, Appendix A) that creates mini-batches of events over the global timeline in sliding-window fashion and allows capturing dependencies across batches while retaining efficiency." Algorithm 1 presents the survival loss computation procedure.
Open Source Code | No | The paper does not provide a concrete access link or explicit statement of open-source code for the methodology.
Open Datasets | Yes | "We use two datasets: Global Database of Events, Language, and Tone (GDELT) (Leetaru & Schrodt, 2013) and Integrated Crisis Early Warning System (ICEWS) (Boschee et al., 2017) which has recently gained attention in learning community (Schein et al., 2016) as useful temporal KGs."
Dataset Splits | No | "We therefore partition our test set in 12 different slides and report results in each window. For both datasets, each slide included 2 weeks of time."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | Appendix C provides implementation details of the method and competitors, but the provided text does not specify software dependencies with version numbers.
Experiment Setup | No | "In our experiments, we choose d = l and d = c but they can be chosen differently." The paper mentions tanh as the activation function but does not provide specific hyperparameter values such as learning rate, batch size, or optimizer settings.
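The Pseudocode row quotes the paper's Global BPTT idea: mini-batches of events taken over the global timeline in sliding-window fashion. The sketch below only illustrates the windowing itself, not Algorithm 2 from the paper; the event tuple layout, window and stride parameters, and the use of overlapping windows (rather than carried-over RNN state) are all assumptions made for illustration.

```python
from typing import List, Tuple

# Hypothetical event record: (subject, relation, object, timestamp).
Event = Tuple[int, int, int, float]

def sliding_window_batches(events: List[Event],
                           window: float,
                           stride: float) -> List[List[Event]]:
    """Split a time-ordered event stream into mini-batches taken over
    the global timeline in sliding-window fashion. Overlap between
    consecutive windows is a stand-in for the cross-batch dependencies
    the paper's Global BPTT captures by carrying state across batches."""
    if not events:
        return []
    events = sorted(events, key=lambda e: e[3])  # order by timestamp
    t_end = events[-1][3]
    batches = []
    start = events[0][3]
    while start <= t_end:
        batch = [e for e in events if start <= e[3] < start + window]
        if batch:
            batches.append(batch)
        start += stride
    return batches
```

With a stride smaller than the window, each batch shares events with its neighbors, so no temporal dependency falls entirely between two batches.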
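The Dataset Splits row quotes the paper's evaluation protocol: the test set is partitioned into 12 slides of 2 weeks each, with results reported per window. A minimal sketch of that partitioning, assuming a hypothetical test-period start date and contiguous, non-overlapping windows (the paper does not state either detail):

```python
from datetime import datetime, timedelta
from typing import List, Tuple

def two_week_slides(test_start: datetime,
                    n_slides: int = 12) -> List[Tuple[datetime, datetime]]:
    """Partition a test period into n_slides consecutive 2-week
    evaluation windows, returned as (start, end) half-open intervals."""
    slides = []
    for i in range(n_slides):
        lo = test_start + timedelta(weeks=2 * i)
        hi = lo + timedelta(weeks=2)
        slides.append((lo, hi))
    return slides
```

Link-prediction metrics would then be computed separately on the events falling inside each interval, mirroring the per-window reporting the quote describes.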