Temporal Knowledge Graph Completion Using Box Embeddings

Authors: Johannes Messner, Ralph Abboud, Ismail Ilkan Ceylan

AAAI 2022, pp. 7779-7787

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, we conduct a detailed experimental evaluation, and show that BoxTE achieves state-of-the-art performance on several TKGC benchmarks, even with a limited number of parameters.
Researcher Affiliation | Academia | Johannes Messner, Ralph Abboud, Ismail Ilkan Ceylan, Department of Computer Science, University of Oxford, UK; messnerjo@gmail.com, {ralph.abboud, ismail.ceylan}@cs.ox.ac.uk
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states: 'The full version of this work, including all proofs and experimental details, is available on arXiv (Messner, Abboud, and Ceylan 2021).' This points to a preprint, not to a code repository for the method.
Open Datasets | Yes | We evaluate BoxTE on TKG benchmarks ICEWS14, ICEWS5-15 (García-Durán, Dumančić, and Niepert 2018), and GDELT (Leetaru and Schrodt 2013).
Dataset Splits | Yes | We run experiments both in the standard temporal graph completion setting, and with the recently proposed bounded-parameter setting (Lacroix, Obozinski, and Usunier 2020)... We additionally conduct experiments using the temporal smoothness regularizer from TNTComplEx (Lacroix, Obozinski, and Usunier 2020)... We experiment with k values in the set {2, 3, 5}, and use validation to tune embedding dimensionality d, training batch size, and the number of negative samples... The empirical results for the standard TKGC setting can be found in Table 3. (A hypothetical sketch of this validation-based tuning follows the table below.)
Hardware Specification | No | Experiments were conducted on the Advanced Research Computing (ARC) cluster administered by the University of Oxford (Richards 2015). This mentions a cluster but does not provide specific hardware details such as GPU/CPU models or memory.
Software Dependencies | No | The paper mentions using the 'Adam optimizer (Kingma and Ba 2015)' and 'cross-entropy loss', but does not specify version numbers for any software libraries, frameworks, or programming languages (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | We experiment with k values in the set {2, 3, 5}, and use validation to tune embedding dimensionality d, training batch size, and the number of negative samples... Finally, we train BoxTE with cross-entropy loss, and use the Adam optimizer (Kingma and Ba 2015) (learning rate 10^-3). Full details and explanations about hyper-parameter setup are provided in the appendix of the full version. (A minimal sketch of this training setup follows below.)
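
The Experiment Setup row names concrete training choices: cross-entropy loss over negative samples and the Adam optimizer with learning rate 10^-3. Below is a minimal PyTorch sketch of that setup. The scoring function is a deliberately simplified point-to-box distance, not the actual BoxTE model (which uses relation boxes together with entity and temporal bumps); the class names, dimensions, and the uniform negative sampler are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ToyBoxScorer(nn.Module):
    """Simplified box-style scorer; NOT the full BoxTE model."""
    def __init__(self, n_entities, n_relations, n_timestamps, dim=64):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)          # entity points
        self.rel_center = nn.Embedding(n_relations, dim)  # relation box centers
        self.rel_offset = nn.Embedding(n_relations, dim)  # relation box half-widths
        self.time = nn.Embedding(n_timestamps, dim)       # additive time shift

    def forward(self, heads, rels, tails, times):
        # Score a fact (h, r, t, tau) by how far the head and tail points
        # fall outside a time-shifted relation box; higher is more plausible.
        center = self.rel_center(rels) + self.time(times)
        width = torch.abs(self.rel_offset(rels))
        dist_h = torch.relu(torch.abs(self.ent(heads) - center) - width)
        dist_t = torch.relu(torch.abs(self.ent(tails) - center) - width)
        return -(dist_h.sum(-1) + dist_t.sum(-1))

def train_step(model, optimizer, batch, n_entities, n_neg=50):
    heads, rels, tails, times = batch  # each a (B,) LongTensor
    # Corrupt tails with uniformly sampled negatives, shape (B, n_neg).
    neg_tails = torch.randint(n_entities, (tails.size(0), n_neg))
    pos = model(heads, rels, tails, times).unsqueeze(1)        # (B, 1)
    neg = model(heads.unsqueeze(1).expand(-1, n_neg),
                rels.unsqueeze(1).expand(-1, n_neg),
                neg_tails,
                times.unsqueeze(1).expand(-1, n_neg))          # (B, n_neg)
    logits = torch.cat([pos, neg], dim=1)
    # Cross-entropy over candidates: the true tail is class 0.
    target = torch.zeros(logits.size(0), dtype=torch.long)
    loss = nn.functional.cross_entropy(logits, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Adam with learning rate 1e-3, as stated in the paper excerpt.
# Entity/relation/timestamp counts are ICEWS14-sized, for illustration only.
model = ToyBoxScorer(n_entities=7128, n_relations=230, n_timestamps=365)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```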
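
The Dataset Splits row additionally describes validation-driven tuning of hyper-parameters. The skeleton below sketches such a search, reusing ToyBoxScorer and train_step from the previous sketch. make_batches and validation_mrr are invented stand-ins for a data loader and a validation-MRR evaluator, the candidate grids are assumptions (the excerpt lists none besides k), and k itself (the number of temporal bases in BoxTE, chosen from {2, 3, 5}) is omitted because the toy scorer does not model it.

```python
from itertools import product

# Assumed candidate values; the paper excerpt does not list the actual grids.
search_space = {"dim": [64, 128], "batch_size": [256, 512], "n_neg": [25, 50]}

best_mrr, best_cfg = -1.0, None
for dim, batch_size, n_neg in product(*search_space.values()):
    model = ToyBoxScorer(n_entities=7128, n_relations=230,   # ICEWS14-sized,
                         n_timestamps=365, dim=dim)          # illustrative only
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for batch in make_batches(train_facts, batch_size):      # hypothetical loader
        train_step(model, optimizer, batch, n_entities=7128, n_neg=n_neg)
    mrr = validation_mrr(model, valid_facts)                 # hypothetical evaluator
    if mrr > best_mrr:
        best_mrr, best_cfg = mrr, {"dim": dim,
                                   "batch_size": batch_size, "n_neg": n_neg}
print("best validation MRR:", best_mrr, best_cfg)
```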