Knowledge Graph Embedding by Normalizing Flows
Authors: Changyi Xiao, Xiangnan He, Yixin Cao
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate the effectiveness of introducing uncertainty and our model. The code is available at https://github.com/changyi7231/NFE. In this section, we first introduce the experimental settings and compare NFE with existing models. We then show the effectiveness of introducing uncertainty. Finally, we conduct ablation studies. Please see Appendix D for more experimental details. |
| Researcher Affiliation | Academia | Changyi Xiao¹, Xiangnan He¹*, Yixin Cao² (¹School of Data Science, University of Science and Technology of China; ²School of Computing and Information Systems, Singapore Management University) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/changyi7231/NFE. |
| Open Datasets | Yes | We evaluate our model on three popular knowledge graph completion datasets, WN18RR (Dettmers et al. 2018), FB15k-237 (Toutanova et al. 2015) and YAGO3-10 (Dettmers et al. 2018). |
| Dataset Splits | Yes | We use the filtered MR, MRR and Hits@N (H@N) (Bordes et al. 2013) as evaluation metrics and choose the hyper-parameters with the best filtered MRR on the validation set. (A hedged sketch of the filtered metrics appears below the table.) |
| Hardware Specification | No | The paper does not specify the hardware used for its experiments (e.g., exact GPU/CPU models, processor speeds, or memory amounts). |
| Software Dependencies | No | The paper does not list the ancillary software needed to replicate the experiments (e.g., library or framework names with version numbers, such as Python 3.8). |
| Experiment Setup | No | The paper mentions a binary classification loss function with reciprocal learning and a fixed margin γ, and states that hyper-parameters are chosen on the validation set. However, specific values (e.g., learning rate, batch size, number of epochs, optimizer settings) are not given in the main text; further experimental details are deferred to Appendix D. (A hedged sketch of such a loss appears below the table.) |
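
For reference, the filtered evaluation protocol quoted in the Dataset Splits row can be summarized in code. The sketch below is a minimal illustration of filtered MR, MRR, and Hits@N in the sense of Bordes et al. 2013; the function and variable names are illustrative assumptions and are not taken from the NFE repository.

```python
import numpy as np

def filtered_metrics(scores, target, known_true, hits_at=(1, 3, 10)):
    """Per-query filtered ranking metrics.

    scores: 1-D array of model scores over all candidate entities
            (higher = more plausible).
    target: index of the gold entity for this test triple.
    known_true: indices of other entities that also form true triples
                (collected from train/valid/test) and must be filtered.
    """
    scores = scores.astype(float).copy()
    # Filtered setting: other known-true candidates may not outrank the gold.
    others = [i for i in known_true if i != target]
    scores[others] = -np.inf
    # 1-based rank of the gold entity among the remaining candidates.
    rank = int((scores > scores[target]).sum()) + 1
    out = {"MR": rank, "MRR": 1.0 / rank}
    for n in hits_at:
        out[f"H@{n}"] = float(rank <= n)
    return out

# Dataset-level numbers average these per-query values over both
# head-prediction and tail-prediction queries for every test triple.
```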
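
Similarly, the training objective named in the Experiment Setup row, a binary classification loss with a fixed margin γ plus reciprocal learning, admits a short sketch. Everything below (the `score_fn` interface, the softplus formulation, and the inverse-relation indexing) is an assumption about one common way to realize such a loss, not the authors' implementation.

```python
import torch.nn.functional as F

def margin_binary_loss(score_fn, heads, rels, tails, neg_tails, gamma):
    """One tail-prediction direction of the loss.

    score_fn(h, r, t) -> plausibility score (higher = more plausible),
    assumed to broadcast over the extra negative-sample dimension.
    neg_tails: (batch, k) tensor of corrupted tail entity ids.
    """
    pos = score_fn(heads, rels, tails)                                # (batch,)
    neg = score_fn(heads.unsqueeze(1), rels.unsqueeze(1), neg_tails)  # (batch, k)
    # Binary classification against the fixed threshold gamma:
    # softplus(gamma - pos) = -log sigmoid(pos - gamma)   (positive class)
    # softplus(neg - gamma) = -log(1 - sigmoid(neg - gamma)) (negative class)
    return F.softplus(gamma - pos).mean() + F.softplus(neg - gamma).mean()

# Reciprocal learning: a head-prediction query (?, r, t) is rewritten as the
# tail-prediction query (t, r_inv, ?), where r_inv = r + num_relations indexes
# a separately learned inverse-relation embedding, so the same loss applies.
```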