Learning to Exploit Long-term Relational Dependencies in Knowledge Graphs

Authors: Lingbing Guo, Zequn Sun, Wei Hu

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results showed that RSNs outperformed state-of-the-art embedding-based methods for entity alignment and achieved competitive performance for KG completion.
Researcher Affiliation | Academia | State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, Jiangsu, China. Correspondence to: Wei Hu <whu@nju.edu.cn>.
Pseudocode | Yes | The detailed algorithm of the biased random walk sampling is shown in Appendix A.2.
Open Source Code | Yes | We implemented RSNs with TensorFlow. The source code and datasets are accessible online.
Open Datasets | Yes | We considered two benchmark datasets, namely FB15K and WN18, for KG completion (Bordes et al., 2013).
Dataset Splits | No | The paper mentions using "training and test sets" for KG completion and "seed alignment" for entity alignment, but does not give explicit details of a separate validation split, such as percentages or counts.
Hardware Specification | No | No specific hardware details (e.g., CPU, GPU models, or memory specifications) used for running the experiments are provided in the paper.
Software Dependencies | No | The paper mentions "TensorFlow" but does not specify a version number or other software dependencies with their versions.
Experiment Setup | Yes | We set the embedding dimensions to 100 for FB15K and WN18, and 200 for the entity alignment datasets. The maximum length of relational paths is set to 15. The number of negative samples k is set to 20. The learning rate is set to 0.001. We use Adam (Kingma & Ba, 2015) as the optimizer. We train the RSNs with mini-batches, and the batch size is set to 256. For entity alignment, the training epoch is set to 1500, and for KG completion, it is 1000. We also use dropout (Srivastava et al., 2014) with a probability of 0.5. For biased random walks, we set α = 0.8 and β = 0.8. We implement RSNs with three layers of LSTMs. We use batch normalization (Ioffe & Szegedy, 2015) during training.
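
For readers who want the reported setup collected in one place, the sketch below gathers the quoted hyperparameters into a runnable TensorFlow 2 / Keras scaffold. The names RSN_CONFIG and build_sequence_encoder, and the plain stacked-LSTM encoder, are assumptions made for illustration: this is not the authors' original TensorFlow implementation and it does not reproduce the recurrent skipping mechanism that distinguishes RSNs from an ordinary LSTM.

```python
# Hedged sketch only: collects the hyperparameters quoted above in one place.
# The stacked-LSTM encoder is a generic stand-in, NOT the authors' RSN
# (the residual "skipping" connections are omitted); all names are assumptions.
import tensorflow as tf

RSN_CONFIG = {
    "embedding_dim": {"kg_completion": 100,        # FB15K / WN18
                      "entity_alignment": 200},    # entity alignment datasets
    "max_path_length": 15,        # maximum length of sampled relational paths
    "num_negative_samples": 20,   # k
    "learning_rate": 1e-3,
    "batch_size": 256,
    "epochs": {"entity_alignment": 1500, "kg_completion": 1000},
    "dropout_rate": 0.5,
    "random_walk_alpha": 0.8,     # biased random walk parameter α (reported value)
    "random_walk_beta": 0.8,      # biased random walk parameter β (reported value)
    "num_lstm_layers": 3,
}

def build_sequence_encoder(vocab_size: int, task: str = "entity_alignment") -> tf.keras.Model:
    """Generic stacked-LSTM encoder over relational paths (illustrative only)."""
    dim = RSN_CONFIG["embedding_dim"][task]
    inputs = tf.keras.Input(shape=(RSN_CONFIG["max_path_length"],), dtype=tf.int32)
    x = tf.keras.layers.Embedding(vocab_size, dim)(inputs)
    for _ in range(RSN_CONFIG["num_lstm_layers"]):
        x = tf.keras.layers.LSTM(dim, return_sequences=True)(x)
        x = tf.keras.layers.BatchNormalization()(x)           # batch normalization during training
        x = tf.keras.layers.Dropout(RSN_CONFIG["dropout_rate"])(x)
    logits = tf.keras.layers.Dense(vocab_size)(x)              # predict the next element of each path
    model = tf.keras.Model(inputs, logits)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=RSN_CONFIG["learning_rate"]),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
    return model
```

A call such as build_sequence_encoder(vocab_size=30000) followed by model.fit on batches of 256 sampled paths would exercise these settings; the biased random walk parameters α and β are only recorded in the config here, since the sampling algorithm itself is specified in the paper's Appendix A.2.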