Holographic Embeddings of Knowledge Graphs
Authors: Maximilian Nickel, Lorenzo Rosasco, Tomaso Poggio
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimentally, we show that holographic embeddings are able to outperform state-of-the-art methods for link prediction on knowledge graphs and relational learning benchmark datasets. |
| Researcher Affiliation | Academia | Maximilian Nickel (1,2), Lorenzo Rosasco (1,2,3), and Tomaso Poggio (1): (1) Laboratory for Computational and Statistical Learning and Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA; (2) Istituto Italiano di Tecnologia, Genova, Italy; (3) DIBRIS, Università degli Studi di Genova, Italy |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code for models and experiments used in this paper is available at https://github.com/mnick/holographic-embeddings. |
| Open Datasets | Yes | WN18: WordNet is a KG that groups words into sets of synonyms and provides lexical relationships between words... FB15k: Freebase is a large knowledge graph... For both datasets we used the fixed training-, validation-, and test-splits provided by Bordes et al. (2013). |
| Dataset Splits | Yes | For both datasets we used the fixed training-, validation-, and test-splits provided by Bordes et al. (2013). |
| Hardware Specification | Yes | On standard hardware (Intel Core(TM) i7 @ 2.1GHz) and for d = 150 (as used in the experiments) the runtime to compute the probability of a single triple is around 40 μs. |
| Software Dependencies | No | No specific software versions (e.g., library or solver names with version numbers) are provided. |
| Experiment Setup | No | The paper mentions using SGD with AdaGrad and a ranking loss (eq. 3), and optimizing hyperparameters via extensive grid search, but does not provide the specific hyperparameter values or detailed training configurations (e.g., learning rate, batch size, number of epochs). |
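The Hardware Specification row quotes a per-triple probability cost of roughly 40 μs at d = 150. That cost is dominated by the circular-correlation score HolE is built on, σ(r_p⊤(e_s ⋆ e_o)), which the paper computes via the FFT identity a ⋆ b = F⁻¹(conj(F(a)) ⊙ F(b)). Below is a minimal NumPy sketch of that computation; the embeddings are random placeholders rather than trained parameters.

```python
# Minimal sketch of the HolE triple score via circular correlation (FFT trick).
# Placeholder embeddings; not the paper's trained parameters.
import numpy as np

def circular_correlation(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Circular correlation of two real vectors, computed via the FFT."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def hole_score(e_s: np.ndarray, r_p: np.ndarray, e_o: np.ndarray) -> float:
    """Probability that triple (s, p, o) holds: sigmoid of r_p . (e_s * e_o)."""
    eta = r_p @ circular_correlation(e_s, e_o)
    return 1.0 / (1.0 + np.exp(-eta))  # logistic sigmoid

d = 150  # embedding dimension used in the paper's experiments
rng = np.random.default_rng(0)
e_s, r_p, e_o = (rng.standard_normal(d) for _ in range(3))
print(hole_score(e_s, r_p, e_o))
```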
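For context on the Open Datasets and Dataset Splits rows, here is a minimal sketch of loading the fixed splits of Bordes et al. (2013). The tab-separated head/relation/tail layout matches how WN18 and FB15k are commonly distributed, but the file names and directory below are assumptions; adjust them to the actual release.

```python
# Minimal sketch of loading fixed train/valid/test splits.
# File names and the "FB15k" directory are assumed, not taken from the paper.
from pathlib import Path

def load_triples(path: Path) -> list[tuple[str, str, str]]:
    """Read one split file: one tab-separated (head, relation, tail) triple per line."""
    triples = []
    for line in path.read_text().splitlines():
        head, relation, tail = line.strip().split("\t")
        triples.append((head, relation, tail))
    return triples

splits = {name: load_triples(Path("FB15k") / f"{name}.txt")
          for name in ("train", "valid", "test")}
print({name: len(t) for name, t in splits.items()})
```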
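The Experiment Setup row notes SGD with AdaGrad, the ranking loss of eq. 3, and an extensive grid search, but no concrete values. The sketch below shows the shape of that procedure on a toy objective; every number in it (learning rates, margins, iteration count, the placeholder gradient) is illustrative and is not the paper's configuration.

```python
# Sketch of AdaGrad + margin-based pairwise ranking loss + grid search.
# All hyperparameter values and the toy objective are illustrative placeholders.
import itertools
import numpy as np

def adagrad_step(theta, grad, cache, lr, eps=1e-8):
    """One AdaGrad update: per-parameter step sizes shrink with accumulated squared gradients."""
    cache += grad ** 2
    theta -= lr * grad / (np.sqrt(cache) + eps)
    return theta, cache

def pairwise_ranking_loss(score_pos, score_neg, margin):
    """Hinge-style ranking loss: positive triples should outscore corrupted ones by `margin`."""
    return max(0.0, margin + score_neg - score_pos)

# Illustrative grid search over the two hyperparameters named above.
rng = np.random.default_rng(0)
best = None
for lr, margin in itertools.product([0.01, 0.1], [0.2, 1.0]):
    theta = rng.standard_normal(10)   # toy parameter vector
    cache = np.zeros_like(theta)
    for _ in range(100):              # toy training iterations
        grad = theta - 1.0            # placeholder gradient (pulls theta toward 1)
        theta, cache = adagrad_step(theta, grad, cache, lr)
    val_loss = pairwise_ranking_loss(theta.mean(), 0.0, margin)
    if best is None or val_loss < best[0]:
        best = (val_loss, lr, margin)
print("best (val_loss, lr, margin):", best)
```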