ChronoR: Rotation Based Temporal Knowledge Graph Embedding
Authors: Ali Sadeghian, Mohammadreza Armandpour, Anthony Colas, Daisy Zhe Wang (pp. 6471–6479)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimentally, we show that ChronoR is able to outperform many of the state-of-the-art methods on the benchmark datasets for temporal knowledge graph link prediction. |
| Researcher Affiliation | Academia | ¹University of Florida, ²Texas A&M University; {asadeghian, acolas1, daisyw}@ufl.edu, armand@stat.tamu.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The source code to reproduce the full experimental results will be made public on GitHub. |
| Open Datasets | Yes | We evaluate our model on three popular benchmarks for temporal knowledge graph completion, namely ICEWS14, ICEWS05-15, and YAGO15K. ... To create YAGO15K, García-Durán, Dumančić, and Niepert (2018) aligned the entities in FB15K (Bordes et al. 2013) with those from YAGO, which contains temporal information. |
| Dataset Splits | Yes | We tune all the hyperparameters using a grid search and each dataset's provided validation set. |
| Hardware Specification | Yes | We implemented all our models in Pytorch and trained on a single GeForce RTX 2080 GPU. |
| Software Dependencies | No | The paper mentions 'Pytorch' but does not provide a specific version number for it or any other software dependency. |
| Experiment Setup | Yes | We tune all the hyperparameters using a grid search and each dataset's provided validation set. We tune λ1 and λ2 from {10^i | −3 ≤ i ≤ 1} and the ratio n_r/n_τ from [0.1, 0.9] with 0.1 increments. For a fair comparison, we do not tune the embedding dimension; instead, in each experiment we choose n such that our models have an equal number of parameters to those used in (Lacroix, Obozinski, and Usunier 2020). ... Training was done using mini-batch stochastic gradient descent with AdaGrad and a learning rate of 0.1 with a batch size of 1000 quadruples. (A hedged configuration sketch follows this table.) |
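
For concreteness, the Experiment Setup row can be read as the training configuration sketched below. This is a minimal PyTorch sketch, not the authors' code (which had not been released): `ToyScorer`, the random quadruples, and the surrogate loss are placeholders, and the regularization grid is an assumption based on the quoted text. Only the optimizer (AdaGrad), the learning rate of 0.1, the batch size of 1000, and validation-set selection come directly from the paper; the n_r/n_τ ratio sweep and the parameter-count matching against Lacroix, Obozinski, and Usunier (2020) are omitted here.

```python
import itertools

import torch
import torch.nn as nn

# Minimal sketch of the reported setup: mini-batch training with AdaGrad,
# learning rate 0.1, batch size 1000, and a grid search over the regularization
# weights lambda1 and lambda2. ToyScorer and the random quadruples are
# placeholders, NOT the ChronoR model or the ICEWS / YAGO15K data.

N_ENT, N_REL, N_TIME, DIM = 100, 20, 30, 32


class ToyScorer(nn.Module):
    """Placeholder embedding scorer standing in for the actual ChronoR model."""

    def __init__(self):
        super().__init__()
        self.ent = nn.Embedding(N_ENT, DIM)
        self.rel = nn.Embedding(N_REL, DIM)
        self.time = nn.Embedding(N_TIME, DIM)

    def forward(self, quads):
        # quads: (batch, 4) tensor of (subject, relation, object, timestamp) ids.
        s, r, o, t = quads.unbind(dim=1)
        return (self.ent(s) * self.rel(r) * self.time(t) * self.ent(o)).sum(dim=-1)


# Random stand-in quadruples; a real run would load ICEWS14, ICEWS05-15, or YAGO15K.
quads = torch.stack([
    torch.randint(N_ENT, (5000,)), torch.randint(N_REL, (5000,)),
    torch.randint(N_ENT, (5000,)), torch.randint(N_TIME, (5000,)),
], dim=1)

# Assumed regularization grid {10^i | -3 <= i <= 1} for lambda1 and lambda2.
lambda_grid = [10.0 ** i for i in range(-3, 2)]

for lam1, lam2 in itertools.product(lambda_grid, lambda_grid):
    model = ToyScorer()
    optimizer = torch.optim.Adagrad(model.parameters(), lr=0.1)  # reported optimizer and lr
    loader = torch.utils.data.DataLoader(quads, batch_size=1000, shuffle=True)  # reported batch size
    for batch in loader:
        optimizer.zero_grad()
        # Surrogate log-sigmoid loss plus weighted embedding regularizers; the
        # paper's actual loss and regularizers differ in detail.
        loss = -model(batch).sigmoid().log().mean()
        loss = loss + lam1 * model.ent.weight.norm(p=3) ** 3 + lam2 * model.time.weight.norm(p=3) ** 3
        loss.backward()
        optimizer.step()
    # Validation-set MRR would be computed here to select the best (lam1, lam2),
    # mirroring the grid search on each dataset's provided validation set.
```

A faithful reproduction would additionally sweep the n_r/n_τ ratio in [0.1, 0.9] with 0.1 increments and fix the total rank n so that the parameter count matches the models of Lacroix, Obozinski, and Usunier (2020), as described in the row above.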