Quaternion Knowledge Graph Embeddings
Authors: Shuai Zhang, Yi Tay, Lina Yao, Qi Liu
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that our method achieves state-of-the-art performance on four well-established knowledge graph completion benchmarks. |
| Researcher Affiliation | Academia | University of New South Wales; Nanyang Technological University; University of Oxford |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | Datasets Description: We conducted experiments on four widely used benchmarks, WN18, FB15K, WN18RR and FB15K-237, of which the statistics are summarized in Table 2. |
| Dataset Splits | Yes | The best models are selected by early stopping on the validation set. Table 2 also includes a '#validation' column with specific counts for each dataset. |
| Hardware Specification | No | The paper only vaguely mentions 'tested it on a single GPU' without providing any specific model numbers or hardware details. |
| Software Dependencies | No | The paper states 'We implemented our model using pytorch4', but 'pytorch4' refers to footnote 4 (https://pytorch.org/) and does not specify a version number. |
| Experiment Setup | Yes | The embedding size k is tuned amongst {50, 100, 200, 250, 300}. Regularization rate λ1 and λ2 are searched in {0, 0.01, 0.05, 0.1, 0.2}. Learning rate is fixed to 0.1 without further tuning. The number of negatives (#neg) per training sample is selected from {1, 5, 10, 20}. We create 10 batches for all the datasets. |
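The hyperparameter search space quoted in the last row can be enumerated explicitly. The sketch below is illustrative only: the variable names (`embedding_sizes`, `reg_rates`, `neg_samples`, `search_space`) are ours, not the authors', and the paper does not state how combinations were actually explored.

```python
from itertools import product

# Grids as reported in the paper's experiment setup.
embedding_sizes = [50, 100, 200, 250, 300]   # embedding size k
reg_rates = [0, 0.01, 0.05, 0.1, 0.2]        # candidates for lambda_1 and lambda_2
neg_samples = [1, 5, 10, 20]                 # negatives (#neg) per training sample
learning_rate = 0.1                          # fixed, no further tuning

# Enumerate every configuration in the reported search space
# (a full Cartesian product; the paper may have searched it differently).
search_space = [
    {"k": k, "lambda1": l1, "lambda2": l2, "neg": n, "lr": learning_rate}
    for k, l1, l2, n in product(embedding_sizes, reg_rates, reg_rates, neg_samples)
]
print(len(search_space))  # 5 * 5 * 5 * 4 = 500 configurations
```

A full sweep over these grids covers 500 configurations; the paper reports selecting the best model by early stopping on the validation set.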