Dynamic Knowledge Graph Alignment

Authors: Yuchen Yan, Lihui Liu, Yikun Ban, Baoyu Jing, Hanghang Tong

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive evaluations are conducted on the benchmark DBP15K (Sun, Hu, and Li 2017) datasets. In the static setting, the proposed DINGAL-B model consistently outperforms 14 state-of-the-art methods. In the dynamic setting, the proposed DINGAL-O and DINGAL-U are (1) 10× faster and better than the existing static alignment methods; and (2) 10× to 100× faster than their static counterpart (DINGAL-B) with little alignment accuracy loss.
Researcher Affiliation | Academia | Yuchen Yan, Lihui Liu, Yikun Ban, Baoyu Jing, Hanghang Tong; University of Illinois at Urbana-Champaign, Urbana, IL, USA; {yucheny5, lihuil2, yikunb5, baoyuj, htong}@illinois.edu
Pseudocode | No | The paper describes the algorithms using mathematical equations and prose, but it does not include a clearly labeled pseudocode block or algorithm section.
Open Source Code | No | The paper states: "The implementation of the first 6 baseline methods are from EAkit, an open-source entity alignment toolkit." This refers to the baselines, not the authors' own implementation of DINGAL.
Open Datasets | Yes | We use the DBP15K (Sun, Hu, and Li 2017) benchmark datasets, including DBP15K ZH-EN, DBP15K JA-EN and DBP15K FR-EN, built on the Chinese, English, Japanese and French versions of DBpedia. Each dataset provides two KGs in different languages with 15K pre-aligned entity pairs.
Dataset Splits | No | The paper mentions a 30% training / 70% test split of the pre-aligned pairs in both the static and dynamic settings, but it does not define a validation split or explain how one would be used for reproduction (see the split sketch after the table).
Hardware Specification | Yes | The experiments are run on a 1080Ti GPU.
Software Dependencies | No | The paper does not specify the versions of any software dependencies (e.g., Python, PyTorch, TensorFlow) used for the experiments.
Experiment Setup | Yes | The epoch number is set to 1500. The number of negative samples for each positive sample is 125. The learning rate is 0.001. We use a two-layer DINGAL-B in the experiment. The margin hyper-parameter γ in Equation (7) is 1. The embedding dimension is 300 (see the configuration sketch after the table).
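
Regarding the Dataset Splits row: below is a minimal sketch of the 30% / 70% train-test split over the 15K pre-aligned entity pairs, assuming a simple random shuffle. The function name, seed, and dummy pair loading are our own illustration; the paper releases no code and defines no validation split.

```python
# Hypothetical sketch of the 30% / 70% train-test split described in the paper;
# names and loading logic are assumptions, not the authors' implementation.
import random

def split_alignment_pairs(pairs, train_ratio=0.3, seed=0):
    """Split pre-aligned entity pairs (e.g., the 15K pairs in DBP15K) into a
    training set and a test set. No validation split is produced, since the
    paper does not define one."""
    rng = random.Random(seed)
    shuffled = pairs[:]                      # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]    # (30% train, 70% test)

# Example usage with dummy entity-ID pairs from the two KGs:
pairs = [(i, i) for i in range(15000)]
train_pairs, test_pairs = split_alignment_pairs(pairs)
print(len(train_pairs), len(test_pairs))     # 4500 10500
```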
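Regarding the Experiment Setup row: the reported hyper-parameters can be collected into a single configuration, as sketched below. The key names are assumptions chosen for readability; only the values come from the paper.

```python
# Hedged summary of the reported DINGAL-B setup as a config dict;
# key names are illustrative, values are as stated in the paper.
DINGAL_B_CONFIG = {
    "epochs": 1500,            # epoch number set to 1500
    "negative_samples": 125,   # negatives per positive alignment pair
    "learning_rate": 0.001,
    "num_layers": 2,           # two-layer DINGAL-B
    "margin_gamma": 1.0,       # margin hyper-parameter γ in Equation (7)
    "embedding_dim": 300,
}
```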