Interpreting Knowledge Graph Relation Representation from Word Embeddings
Authors: Carl Allen, Ivana Balazevic, Timothy Hospedales
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that empirical properties of relation representations and the relative performance of leading knowledge graph representation methods are justified by our analysis. |
| Researcher Affiliation | Collaboration | (1) University of Edinburgh, UK; (2) Samsung AI Centre, Cambridge, UK |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about the release of source code or links to a code repository. |
| Open Datasets | Yes | Table 2: Categorisation of WN18RR relations. Analysing the commonly used FB15k-237 dataset (Toutanova et al., 2015) reveals relations to be almost exclusively of type C, precluding a contrast of performance per relation type and hence that dataset is omitted from our analysis. Instead, we categorise a random subsample of 12 relations from the NELL-995 dataset (Xiong et al., 2017), containing 75,492 entities and 200 relations (see Tables 8 and 9 in Appx. B). |
| Dataset Splits | Yes | To ensure a fair representation of all training set relations in the validation and test sets, we create new validation and test set splits by combining the initial validation and test sets with the training set and randomly selecting 10,000 triples each from the combined dataset. (A minimal code sketch of this re-splitting is given after the table.) |
| Hardware Specification | No | The paper does not provide any specific hardware details used for running its experiments. |
| Software Dependencies | No | The paper mentions "PyTorch" and the "Adam optimizer" but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | All algorithms are re-implemented in PyTorch with the Adam optimizer (Kingma & Ba, 2015) that minimises binary cross-entropy loss, using hyper-parameters that work well for all models (learning rate: 0.001, batch size: 128, number of negative samples: 50). Entity and relation embedding dimensionality is set to d_e = d_r = 200 for all models except TuckER, for which d_r = 30 (Balažević et al., 2019b). (A minimal training-loop sketch follows the table.) |
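
The re-splitting described in the "Dataset Splits" row is straightforward to reproduce. The sketch below is a minimal illustration, assuming triples are held as Python lists of (head, relation, tail) tuples; the function name, random seed, and data format are illustrative assumptions, not details from the paper.

```python
import random

def resplit_triples(train, valid, test, n_eval=10_000, seed=0):
    """Re-create validation/test splits as described in the quoted row:
    pool all triples, then draw 10,000 each for the new validation and
    test sets. The remaining triples stay in training."""
    rng = random.Random(seed)
    combined = train + valid + test              # all (head, relation, tail) triples
    rng.shuffle(combined)
    new_valid = combined[:n_eval]                # 10,000 triples for validation
    new_test = combined[n_eval:2 * n_eval]       # 10,000 triples for test
    new_train = combined[2 * n_eval:]            # remainder for training
    return new_train, new_valid, new_test
```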
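
The "Experiment Setup" row fixes the optimiser, loss, and hyper-parameters but not the training loop itself. The following is a minimal PyTorch sketch under those settings; the model interface, data loader, epoch count, and the use of `BCEWithLogitsLoss` for binary cross-entropy are assumptions.

```python
import torch

# Hyper-parameters quoted in the "Experiment Setup" row.
LEARNING_RATE = 0.001
BATCH_SIZE = 128              # applied when constructing the data loader
NUM_NEGATIVE_SAMPLES = 50     # applied when constructing the data loader
ENTITY_DIM = RELATION_DIM = 200   # TuckER instead uses a relation dimension of 30

def train(model, loader, num_epochs):
    """Generic PyTorch loop with Adam and binary cross-entropy.
    `model` is assumed to score (head, relation, tail) index batches;
    `loader` is assumed to yield such batches with 0/1 labels; the
    epoch count is not stated in the excerpt."""
    optimiser = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
    loss_fn = torch.nn.BCEWithLogitsLoss()       # binary cross-entropy on logits
    for _ in range(num_epochs):
        for heads, relations, tails, labels in loader:
            scores = model(heads, relations, tails)
            loss = loss_fn(scores, labels.float())
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
```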