Canonical Tensor Decomposition for Knowledge Base Completion

Authors: Timothee Lacroix, Nicolas Usunier, Guillaume Obozinski

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We perform all our experiments on 5 common benchmark datasets of link prediction in knowledge bases."
Researcher Affiliation | Collaboration | "1 Facebook AI Research, Paris, France; 2 Université Paris-Est, Equipe Imagine, LIGM (UMR 8049), École des Ponts ParisTech, Marne-la-Vallée, France."
Pseudocode | No | No pseudocode or clearly labeled algorithm block found in the paper.
Open Source Code | Yes | "The code is available at https://github.com/facebookresearch/kbc."
Open Datasets | Yes | "WN18 and FB15K are popular benchmarks in the Knowledge Base Completion community."
Dataset Splits | Yes | "We used the train/valid/test splits provided with these datasets and measured the filtered Mean Reciprocal Rank (MRR) and Hits@10 (Bordes et al., 2013)."
Hardware Specification | Yes | "We conducted all experiments on a Quadro GP100 GPU."
Software Dependencies | No | The paper mentions optimizers such as Adagrad and Adam but does not provide version numbers for any software dependencies or libraries.
Experiment Setup | Yes | "Our grid consisted of two learning rates: 10^-1 and 10^-2; two batch sizes: 25 and 100; and regularization coefficients in {0, 10^-3, 5·10^-3, 10^-2, 5·10^-2, 10^-1, 5·10^-1}. ... We trained for 100 epochs to ensure convergence."
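Although the paper contains no labeled pseudocode, its core scoring rule, the canonical (CP/CANDECOMP-PARAFAC) decomposition named in the title, can be sketched in a few lines. The shapes, variable names, and random data below are illustrative assumptions, not taken from the authors' released code.

```python
# Hedged sketch of CP scoring for knowledge base completion: a triple
# (subject, relation, object) is scored by the trilinear product of
# three rank-d embedding vectors. All names/sizes here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, rank = 5, 3, 4
U = rng.normal(size=(n_entities, rank))   # subject embeddings
V = rng.normal(size=(n_relations, rank))  # relation embeddings
W = rng.normal(size=(n_entities, rank))   # object embeddings

def cp_score(s, r, o):
    # <u_s, v_r, w_o> = sum_d U[s, d] * V[r, d] * W[o, d]
    return float(np.sum(U[s] * V[r] * W[o]))

# For ranking, one typically scores all candidate objects at once:
all_object_scores = np.einsum('d,d,nd->n', U[0], V[1], W)
```

The `einsum` call reproduces `cp_score(0, 1, o)` for every object `o` in a single vectorized pass, which is how ranking over all entities is usually made tractable.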
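The filtered MRR and Hits@10 metrics cited in the dataset-splits row (Bordes et al., 2013) can be sketched as follows. The data structures (`scores`, `known_true`) and the tiny example are hypothetical; only the filtering rule, that other known-true entities are removed before computing the target's rank, comes from the evaluation protocol the review quotes.

```python
# Minimal sketch of filtered ranking metrics for link prediction.
# A query's candidates are ranked by score; entities that form other
# known-true triples are filtered out before computing the target's rank.

def filtered_metrics(test_queries, scores, known_true, k=10):
    """test_queries: list of (query, true_entity) pairs.
    scores: dict query -> dict entity -> float.
    known_true: dict query -> set of entities forming true triples."""
    rr_sum, hits = 0.0, 0
    for query, target in test_queries:
        s = scores[query]
        target_score = s[target]
        # Filtered rank: count only false entities that outscore the target.
        rank = 1 + sum(
            1 for e, v in s.items()
            if v > target_score and e != target and e not in known_true[query]
        )
        rr_sum += 1.0 / rank
        hits += rank <= k
    n = len(test_queries)
    return rr_sum / n, hits / n

# Tiny hypothetical example: target "b" is outscored by "a" and "c",
# but "a" is another true answer, so only "c" counts -> rank 2.
scores = {"q": {"a": 0.9, "b": 0.5, "c": 0.8}}
known_true = {"q": {"a"}}
mrr, hits_at_10 = filtered_metrics([("q", "b")], scores, known_true, k=10)
```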