Tensor Decompositions for Temporal Knowledge Base Completion
Authors: Timothée Lacroix, Guillaume Obozinski, Nicolas Usunier
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5, "Experimental Results": "Experiments on available benchmarks show that our method outperforms the state of the art for a similar number of parameters. We run additional experiments for larger, regularized models and obtain improvements of up to 0.07 absolute Mean Reciprocal Rank (MRR)." A minimal sketch of how MRR is computed follows the table. |
| Researcher Affiliation | Collaboration | Timothée Lacroix¹﹐², Guillaume Obozinski³, Nicolas Usunier¹ (¹ Facebook AI Research; ² ENPC; ³ Swiss Data Science Center, EPFL & ETH Zürich) |
| Pseudocode | No | The paper describes methods using mathematical equations and prose but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | "In order to address these limitations, we created our own dataset from Wikidata, which we make available along with the code for this paper at https://github.com/facebookresearch/tkbc." |
| Open Datasets | Yes | "In order to address these limitations, we created our own dataset from Wikidata, which we make available along with the code for this paper at https://github.com/facebookresearch/tkbc." "The ICEWS datasets are samplings from the Integrated Conflict Early Warning System (ICEWS) (Boschee et al., 2015)." "The Yago15K dataset (García-Durán et al., 2018) is a modification of FB15k (Bordes et al., 2013)." |
| Dataset Splits | Yes | Our train set contains 7M such tuples, with about 10% partially specified temporal tuples. We kept a validation and test set of size 50k each. |
| Hardware Specification | Yes | "Training TNTComplEx on Wikidata with a rank of d = 100 with the full cross-entropy on a Quadro GP100, we obtain a speed of 5.6k triples per second..." |
| Software Dependencies | No | The paper mentions that models are "optimized with Adagrad (Duchi et al., 2011)" but does not specify any software versions for programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | "All our models are optimized with Adagrad (Duchi et al., 2011), with a learning rate of 0.1, a batch-size of 1000. More details on the grid-search, actual ranks used and hyper-parameters are given in Appendix 8.7." A sketch of this training step follows the table. |
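The results above are reported in Mean Reciprocal Rank (MRR), the standard link-prediction metric: the mean over queries of one over the rank of the correct entity. The sketch below shows a minimal filtered-style MRR computation over a batch of ranking queries; the function name and the toy scores are illustrative and are not taken from the tkbc codebase.

```python
import numpy as np

def mean_reciprocal_rank(scores, true_indices):
    """MRR over a batch of ranking queries.

    scores: (n_queries, n_entities) array of model scores, higher is better.
    true_indices: (n_queries,) array with the column of the correct entity.
    """
    # Score assigned to the correct entity for each query.
    true_scores = scores[np.arange(len(true_indices)), true_indices]
    # Rank of the correct entity: 1 + number of entities scored strictly higher.
    ranks = 1 + (scores > true_scores[:, None]).sum(axis=1)
    return float((1.0 / ranks).mean())

# Toy example: 2 queries over 4 candidate entities.
scores = np.array([[0.1, 0.9, 0.3, 0.2],   # correct entity 1 ranks 1st -> RR = 1.0
                   [0.5, 0.4, 0.6, 0.1]])  # correct entity 0 ranks 2nd -> RR = 0.5
print(mean_reciprocal_rank(scores, np.array([1, 0])))  # 0.75
```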
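The reported setup (Adagrad, learning rate 0.1, batch size 1000) maps directly onto a standard PyTorch training step, sketched below. Only the optimizer choice and the two hyper-parameter values come from the quote above; the `torch.nn.Linear` stand-in and the random batch are placeholders, not the paper's TNTComplEx model.

```python
import torch

# Placeholder model standing in for TNTComplEx; any nn.Module trained with
# full cross-entropy over candidate entities fits the same loop.
model = torch.nn.Linear(100, 1000)

# Hyper-parameters quoted from the paper: Adagrad, lr = 0.1, batch size 1000.
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()
batch_size = 1000

# Dummy batch: 1000 queries with 100-dim features, entity labels in [0, 1000).
inputs = torch.randn(batch_size, 100)
targets = torch.randint(0, 1000, (batch_size,))

# One optimization step.
optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
```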