Diachronic Embedding for Temporal Knowledge Graph Completion
Authors: Rishab Goel, Seyed Mehran Kazemi, Marcus Brubaker, Pascal Poupart
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments indicate the superiority of our proposal compared to existing baselines. |
| Researcher Affiliation | Industry | Rishab Goel, Seyed Mehran Kazemi, Marcus Brubaker, Pascal Poupart, Borealis AI. {rishab.goel, mehran.kazemi, marcus.brubaker, pascal.poupart}@borealis.com |
| Pseudocode | No | The paper provides mathematical equations and definitions but no distinct pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and datasets: https://github.com/BorealisAI/DE-SimplE. |
| Open Datasets | Yes | Our datasets are subsets of two temporal KGs that have become standard benchmarks for TKGC: ICEWS (Boschee et al. 2015) and GDELT (Leetaru and Schrodt 2013). |
| Dataset Splits | Yes | Table 1 provides a summary of the dataset statistics. We changed the train/validation/test sets following a similar procedure as in (Bordes et al. 2013) to make the problem into a TKGC rather than an extrapolation problem. |
| Hardware Specification | No | The paper states, 'We ran our experiments on a node with four GPUs.' This does not provide specific model numbers for the GPUs, CPU, or memory details. |
| Software Dependencies | No | The paper states, 'We implemented our model and the baselines in PyTorch (Paszke et al. 2017).' It mentions PyTorch but does not specify a version number for it or any other software dependencies. |
| Experiment Setup | Yes | For the other experiments on these datasets, for the fairness of results, we follow a similar experimental setup as in (García-Durán, Dumančić, and Niepert 2018) by using the ADAM optimizer (Kingma and Ba 2014) and setting learning rate = 0.001, batch size = 512, negative ratio = 500, embedding size = 100, and validating every 20 epochs selecting the model giving the best validation MRR. Following the best results obtained in (Ma, Tresp, and Daxberger 2018) (and considering the memory restrictions), for ConT we set embedding size = 40, batch size = 32 on ICEWS14 and GDELT and 16 on ICEWS05-15. We validated dropout values from {0.0, 0.2, 0.4}. We tuned γ for our model from the values {16, 32, 64}. For GDELT, we used a similar setting but with a negative ratio = 5 due to the large size of the dataset. (A minimal sketch of this configuration follows the table.) |
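
For reference, below is a minimal PyTorch sketch of the training configuration quoted in the Experiment Setup row. It is not the authors' released implementation (see the linked repository for that): the embedding table is a placeholder scorer, `NUM_ENTITIES` assumes the ICEWS14 entity count, and `validate_mrr` is a hypothetical stand-in for the validation-MRR evaluation.

```python
import torch
import torch.nn as nn

NUM_ENTITIES = 7128      # ICEWS14 entity count (assumption, from the paper's dataset table)
EMBEDDING_SIZE = 100     # embedding size reported in the paper
LEARNING_RATE = 0.001    # ADAM learning rate
BATCH_SIZE = 512         # batch size (ConT uses 32 on ICEWS14/GDELT, 16 on ICEWS05-15)
NEGATIVE_RATIO = 500     # negatives per positive (5 on GDELT)
VALIDATE_EVERY = 20      # validate every 20 epochs, keep the best validation MRR

model = nn.Embedding(NUM_ENTITIES, EMBEDDING_SIZE)  # placeholder for the actual scorer
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

def validate_mrr(model: nn.Module) -> float:
    """Hypothetical helper: would compute MRR on the validation split."""
    return torch.rand(1).item()

best_mrr, best_state = 0.0, None
for epoch in range(1, 201):
    # One pass over training batches of size BATCH_SIZE would go here, with each
    # positive fact paired with NEGATIVE_RATIO corrupted negatives.
    if epoch % VALIDATE_EVERY == 0:
        mrr = validate_mrr(model)
        if mrr > best_mrr:
            best_mrr = mrr
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
```

The model-selection logic (validate every 20 epochs, keep the checkpoint with the best validation MRR) matches the quoted setup; everything else, including the epoch budget, is illustrative.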