Learning Embeddings from Knowledge Graphs With Numeric Edge Attributes
Authors: Sumit Pai, Luca Costabello
IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5 (Experiments): We assess the predictive power of FocusE on the link prediction task with numeric-enriched triples. Experiments show that FocusE outperforms conventional KGE models and its closest direct competitor UKGE [Chen et al., 2019] in discriminating low-valued triples from high-valued ones. |
| Researcher Affiliation | Industry | Sumit Pai, Luca Costabello, Accenture Labs; {sumit.pai, luca.costabello}@accenture.com |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. |
| Open Source Code | Yes | FocusE and all baselines are implemented with the AmpliGraph library [Costabello et al., 2019] version 1.4.0, using TensorFlow 1.15.2 and Python 3.7. Code and experiments are available at https://github.com/Accenture/AmpliGraph (see the training sketch after this table). |
| Open Datasets | Yes | We experiment with three publicly available benchmark datasets originally proposed by [Chen et al., 2019]. ... CN15K [Chen et al., 2019]. ... NL27K [Chen et al., 2019]. ... PPI5K [Szklarczyk et al., 2016]. ... O*NET20K. We introduce a subset of O*NET ... https://www.onetonline.org/ (see the loading sketch after this table). |
| Dataset Splits | Yes | Table 1: Datasets used in experiments. (...) validation sets only include high-valued triples where w >= 0.8. The Validation row of Table 1 reports 138, 3532, 8161, and 1940 triples across the four datasets. |
| Hardware Specification | Yes | All experiments were run under Ubuntu 16.04 on an Intel Xeon Gold 6142, 64 GB, equipped with a Tesla V100 16GB. |
| Software Dependencies | Yes | FocusE and all baselines are implemented with the AmpliGraph library [Costabello et al., 2019] version 1.4.0, using TensorFlow 1.15.2 and Python 3.7. |
| Experiment Setup | Yes | For each baseline and for FocusE, we carried out extensive grid search over the following ranges of hyperparameter values: embedding dimensionality k = [200, 600], with a step of 100; baseline losses = {negative log-likelihood, multiclass-NLL, self-adversarial}; synthetic negatives ratio η = {5, 10, 20, 30}; learning rate = {1e-3, 5e-3, 1e-4}; epochs = [100, 800], with a step of 100; L3 regularizer, with weight γ = {1e-1, 1e-2, 1e-3}. For FocusE we also tuned the decay λ = [100, 800], with increments of 100 (see the grid-search sketch after this table). |
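For readers reproducing the setup, the snippet below is a minimal sketch (not the authors' script) of training a FocusE-wrapped baseline with the AmpliGraph 1.x API cited above. The toy triples and weights are invented, and the `focusE_numeric_edge_values` argument to `fit()` is how AmpliGraph 1.4 exposes the FocusE layer; verify both against the installed version.

```python
# Minimal sketch: a ComplEx baseline with the FocusE layer enabled via
# AmpliGraph 1.4. Toy data is invented for illustration.
import numpy as np
from ampligraph.latent_features import ComplEx
from ampligraph.evaluation import evaluate_performance, mrr_score, hits_at_n_score

# Toy numeric-enriched triples: (subject, predicate, object) plus a weight.
X_train = np.array([["a", "likes", "b"],
                    ["b", "likes", "c"],
                    ["a", "knows", "c"]])
w_train = np.array([[0.9], [0.3], [0.8]])  # per-triple numeric edge attributes

model = ComplEx(k=200, eta=10, epochs=100, batches_count=1,
                loss="multiclass_nll",
                regularizer="LP", regularizer_params={"p": 3, "lambda": 1e-2},
                optimizer="adam", optimizer_params={"lr": 1e-3})

# Passing numeric edge values switches on the FocusE layer (AmpliGraph >= 1.4).
model.fit(X_train, focusE_numeric_edge_values=w_train)

X_test = np.array([["a", "likes", "c"]])
ranks = evaluate_performance(X_test, model=model, filter_triples=X_train)
print("MRR:", mrr_score(ranks), "Hits@10:", hits_at_n_score(ranks, n=10))
```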
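The numeric-enriched benchmarks listed in the Open Datasets row shipped as loaders in AmpliGraph 1.4. The loading sketch below assumes the loader name (`load_onet20k`; `load_cn15k`, `load_nl27k`, and `load_ppi5k` would be analogous) and the returned dictionary keys, which should be checked against the AmpliGraph 1.4 documentation.

```python
# Sketch: loading a numeric-enriched benchmark. Loader name and returned
# keys are assumptions based on the AmpliGraph 1.4 dataset module.
from ampligraph.datasets import load_onet20k

data = load_onet20k()
X_train = data["train"]                 # (s, p, o) triples, shape (n, 3)
w_train = data["train_numeric_values"]  # numeric edge attribute per triple
print(X_train.shape, w_train.shape)
```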
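The grid in the Experiment Setup row can be expressed as a plain hyperparameter sweep. The grid-search sketch below is illustrative only: exhaustive enumeration of this grid is very large in practice, the toy data is invented, and the mapping of the FocusE decay λ to `embedding_model_params["stop_epoch"]` is an assumption about how AmpliGraph 1.4 names that parameter.

```python
# Illustrative sweep over the paper's reported grid (not the authors' script).
# The lambda -> embedding_model_params["stop_epoch"] mapping is an assumption.
from itertools import product
import numpy as np
from ampligraph.latent_features import ComplEx
from ampligraph.evaluation import evaluate_performance, mrr_score

# Tiny toy split, stand-ins for a real train/validation set.
X_train = np.array([["a", "likes", "b"], ["b", "likes", "c"], ["a", "knows", "c"]])
w_train = np.array([[0.9], [0.3], [0.8]])
X_valid = np.array([["a", "likes", "c"]])

grid = {
    "k": [200, 300, 400, 500, 600],
    "loss": ["nll", "multiclass_nll", "self_adversarial"],
    "eta": [5, 10, 20, 30],
    "lr": [1e-3, 5e-3, 1e-4],
    "epochs": list(range(100, 900, 100)),
    "gamma": [1e-1, 1e-2, 1e-3],                  # L3 regularizer weight
    "focuse_decay": list(range(100, 900, 100)),   # FocusE decay lambda
}

# Exhaustive enumeration (tens of thousands of combinations); in practice
# one would subsample or parallelize this loop.
best_params, best_mrr = None, -1.0
for k, loss, eta, lr, epochs, gamma, decay in product(*grid.values()):
    model = ComplEx(k=k, eta=eta, epochs=epochs, batches_count=1, loss=loss,
                    optimizer="adam", optimizer_params={"lr": lr},
                    regularizer="LP", regularizer_params={"p": 3, "lambda": gamma},
                    embedding_model_params={"stop_epoch": decay})
    model.fit(X_train, focusE_numeric_edge_values=w_train)
    mrr = mrr_score(evaluate_performance(X_valid, model=model,
                                         filter_triples=X_train, verbose=False))
    if mrr > best_mrr:
        best_params, best_mrr = (k, loss, eta, lr, epochs, gamma, decay), mrr

print("best:", best_params, "validation MRR:", best_mrr)
```

Model selection in the paper ranks candidates by validation MRR on high-valued triples, which is what the loop above approximates with its toy validation set.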