Knowledge Graph Alignment Network with Gated Multi-Hop Neighborhood Aggregation
Authors: Zequn Sun, Chengming Wang, Wei Hu, Muhao Chen, Jian Dai, Wei Zhang, Yuzhong Qu
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform thorough experiments with detailed ablation studies and analyses on five entity alignment datasets, demonstrating the effectiveness of AliNet. |
| Researcher Affiliation | Collaboration | 1State Key Laboratory for Novel Software Technology, Nanjing University, China 2Department of Computer Science, University of California, Los Angeles, USA 3Alibaba Group, China |
| Pseudocode | No | The paper describes the model architecture and equations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code of AliNet is accessible online1. 1https://github.com/nju-websoft/AliNet |
| Open Datasets | Yes | DBP15K (Sun, Hu, and Li 2017) has three datasets built from multi-lingual DBpedia, namely DBPZH-EN (Chinese-English), DBPJA-EN (Japanese-English) and DBPFR-EN (French-English). ... DWY100K (Sun et al. 2018) are extracted from DBpedia, Wikidata and YAGO3. |
| Dataset Splits | No | Following the latest progress (Sun et al. 2018; Cao et al. 2019), we use the following datasets and training-test splits. ... We use early stopping to terminate training based on the Hits@1 performance with a patience of 5 epochs. |
| Hardware Specification | No | The paper does not provide specific hardware details (such as GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions optimizers (Adam) and activation functions (tanh(), ReLU()), and a search method (CSLS) but does not provide specific ancillary software details like library names with version numbers (e.g., PyTorch 1.9 or TensorFlow 2.x). |
| Experiment Setup | Yes | We search among the following values for hyper-parameters, i.e., the learning rate in {0.0001, 0.0005, 0.001, 0.005, 0.01}, α1 in {0.1, 0.2, . . . , 0.5}, α2 in {0.01, 0.05, 0.1, 0.2}, λ in {1.0, 1.1, . . . , 2.0}, the hidden representation dimension of each layer in {100, 200, 300, 400, 500}, the number of layers L in {1, 2, 3, 4}, and the number of negative alignment pairs in {5, 10, 15, 20}. The selected setting is that λ = 1.5, α1 = 0.1, α2 = 0.01. The learning rate is 0.001. The batch size for DBP15K is 4,500, and for DWY100K is 10,000. We stack two AliNet layers (L = 2) and each layer combines the one-hop and two-hop information (k = 2). The dimensions of three layers (including the input layer) are 500, 400 and 300, respectively. The activation function for neighborhood aggregation is tanh(), and the one for the gating mechanism is ReLU(). We sample 10 negative samples for each pre-aligned entity pair. We use early stopping to terminate training based on the Hits@1 performance with a patience of 5 epochs. We use CSLS (Conneau et al. 2018) for nearest neighbor search. |
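The CSLS retrieval step named in the setup (Conneau et al. 2018) rescales cosine similarity to counteract hubness before taking the nearest neighbor. A minimal sketch of that scoring rule is below; it is not the AliNet authors' code, and the array names and the neighborhood size `k` are illustrative assumptions.

```python
import numpy as np

def csls_matrix(src, tgt, k=10):
    """CSLS scores between two sets of L2-normalized embeddings.

    CSLS(x, y) = 2*cos(x, y) - r_T(x) - r_S(y), where r_T(x) is the
    mean cosine similarity of x to its k nearest target neighbors
    (and r_S(y) symmetrically for y among source points).
    Illustrative sketch, not the paper's implementation.
    """
    sim = src @ tgt.T  # cosine similarity, rows assumed L2-normalized
    # mean similarity to the k nearest cross-domain neighbors
    r_src = np.mean(np.sort(sim, axis=1)[:, -k:], axis=1)  # (n_src,)
    r_tgt = np.mean(np.sort(sim, axis=0)[-k:, :], axis=0)  # (n_tgt,)
    return 2 * sim - r_src[:, None] - r_tgt[None, :]

def align(src, tgt, k=10):
    # match each source entity to the target with the highest CSLS score
    return np.argmax(csls_matrix(src, tgt, k), axis=1)
```

With identical source and target embedding sets, each entity should be matched to itself, since the local-scaling terms only penalize "hub" points that are close to everything.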