Non-translational Alignment for Multi-relational Networks

Authors: Shengnan Li, Xin Li, Rui Ye, Mingzhong Wang, Haiping Su, Yingzi Ou

IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The extensive experiments on four multi-lingual knowledge graphs demonstrate the effectiveness and robustness of the proposed method over a set of state-of-the-art alignment methods.
Researcher Affiliation | Academia | School of Computer Science, Beijing Institute of Technology, China; School of Business, University of the Sunshine Coast, Australia
Pseudocode | No | The paper describes the model and inference process mathematically and textually, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement about releasing its own source code or a link to a repository for the described methodology.
Open Datasets | Yes | trilingual datasets WK3l-15k and WK3l-120k [Chen et al., 2017], including English (En), German (De), and French (Fr) knowledge graphs which are extracted from DBpedia with known aligned entities as ground truth.
Dataset Splits | No | The paper mentions 'training ratios' and 'test-to-training ratios' but does not explicitly specify a validation set or its split (see the split sketch after the table).
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | Yes | The embedding dimension is set as 100. [...] To infer the vector representations of networks, stochastic gradient descent is applied for optimization. [...] Negative sampling [Mikolov et al., 2013] is applied [...] with training ratios as 80% for entity anchors, 100% for relation anchors.
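
The Dataset Splits and Experiment Setup rows above report an 80% training ratio for entity anchors and a 100% ratio for relation anchors, with no stated validation set. The sketch below shows one way such a split could be realized; the file layout, the helpers `load_anchor_pairs` and `split_anchors`, and the fixed seed are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: applying the reported 80% training ratio to known
# entity anchors. The file layout, helper names, and seed are assumptions.
import random

def load_anchor_pairs(path):
    """Read tab-separated (source_entity, target_entity) anchor pairs."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            src, tgt = line.rstrip("\n").split("\t")
            pairs.append((src, tgt))
    return pairs

def split_anchors(pairs, train_ratio=0.8, seed=0):
    """Shuffle anchors and keep `train_ratio` of them for training; the rest
    serve as the test set. The paper does not mention a validation split."""
    rng = random.Random(seed)
    shuffled = list(pairs)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Example usage (the path is a placeholder):
# entity_anchors = load_anchor_pairs("en_fr_entity_anchors.tsv")
# train_anchors, test_anchors = split_anchors(entity_anchors, train_ratio=0.8)
# Relation anchors would all be kept for training (the reported 100% ratio).
```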
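
The Experiment Setup row also quotes a 100-dimensional embedding space, stochastic gradient descent, and negative sampling in the style of Mikolov et al. [2013]. A minimal PyTorch sketch of those reported settings follows; the learning rate, the number of negative samples, and the logistic loss are illustrative assumptions, and the embedding module stands in for the paper's own alignment objective rather than reproducing it.

```python
# Hypothetical sketch of the reported settings: 100-dimensional embeddings,
# SGD, and negative sampling. Learning rate, negative-sample count, and the
# loss form are assumptions; the paper defines its own objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 100       # reported in the paper
NUM_NEGATIVES = 5     # assumption; not stated in the paper
LEARNING_RATE = 0.01  # assumption; not stated in the paper

class EntityEmbedding(nn.Module):
    """Lookup table mapping entity ids to 100-dimensional vectors."""
    def __init__(self, num_entities, dim=EMBED_DIM):
        super().__init__()
        self.emb = nn.Embedding(num_entities, dim)
        nn.init.xavier_uniform_(self.emb.weight)

    def forward(self, ids):
        return self.emb(ids)

def negative_sampling_loss(pos_score, neg_score):
    """Binary logistic loss over true pairs and randomly corrupted pairs,
    in the spirit of Mikolov et al. [2013]."""
    return -(F.logsigmoid(pos_score).mean() + F.logsigmoid(-neg_score).mean())

# model = EntityEmbedding(num_entities=15_000)
# optimizer = torch.optim.SGD(model.parameters(), lr=LEARNING_RATE)
# Each training step would score true anchor pairs against NUM_NEGATIVES
# corrupted pairs and take an SGD step on negative_sampling_loss.
```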