Logic Attention Based Neighborhood Aggregation for Inductive Knowledge Graph Embedding
Authors: Peifeng Wang, Jialong Han, Chenliang Li, Rong Pan
AAAI 2019, pp. 7152-7159
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the effectiveness of our LAN model on two typical knowledge graph completion tasks, i.e., link prediction and triplet classification. We compare our LAN with two baseline aggregators, MEAN and LSTM, as described in the Encoder section. |
| Researcher Affiliation | Collaboration | Sun Yat-sen University, China; Tencent AI Lab, China; Wuhan University, China |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | No explicit statement about the release of their own source code or a link to a code repository was found. |
| Open Datasets | Yes | we directly use the datasets released by Hamaguchi et al. (2017) which are based on WordNet11 (Socher et al. 2013). Since they do not conduct experiments on the link prediction task, we construct the required datasets based on FB15K (Bordes et al. 2013) following a similar protocol used in Hamaguchi et al. (2017) as follows. |
| Dataset Splits | Yes | The second step is to ensure that unseen entities would not appear in the final training set or validation set. We split the original training set into two data sets, the new training set and the auxiliary set. For a triplet (s, r, o) in the original training set, if s, o ∈ E, it is added to the new training set. If s ∈ U, o ∈ E or s ∈ E, o ∈ U, it is added to the auxiliary set, which serves as existing neighbors for unseen entities in T. Finally, for a triplet (s, r, o) in the original validation set, if s ∈ U or o ∈ U, it is removed from the validation set. (A code sketch of this split protocol follows the table.) |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments were mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., library names with versions) were mentioned in the paper. |
| Experiment Setup | Yes | For triplet classification: Since this task is also conducted in Hamaguchi et al. (2017), we use the same configurations with learning rate α = 0.001, embedding dimension d = 100, and margin γ = 300.0 for all datasets. We randomly sample 64 neighbors for each entity. Zero padding is used when the number of neighbors is less than 64. L2-regularization is applied on the parameters of LAN. The regularization rate is 0.001. For link prediction: We search the best hyper-parameters of all models according to the performance on the validation set. In detail, we search learning rate α in {0.001, 0.005, 0.01, 0.1}, embedding dimension for neighbors d in {20, 50, 100, 200}, and margin γ in {0.5, 1.0, 2.0, 4.0}. The optimal configurations are α = 0.001, d = 100, γ = 1.0 for all the datasets. (Sketches of the neighbor sampling and the grid search also follow the table.) |
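
To make the split protocol quoted under Dataset Splits concrete, here is a minimal Python sketch. It assumes triplets are (subject, relation, object) tuples and that the seen-entity set E and unseen-entity set U are given; the function and variable names are illustrative, since the paper released no code.

```python
def split_for_unseen_entities(train_triplets, valid_triplets, E, U):
    """Split the original training set into a new training set and an
    auxiliary set, and filter the validation set, so that no unseen
    entity (in U) appears in the final training or validation data."""
    new_train, auxiliary = [], []
    for (s, r, o) in train_triplets:
        if s in E and o in E:
            # Both endpoints are seen: keep for training.
            new_train.append((s, r, o))
        elif (s in U and o in E) or (s in E and o in U):
            # Exactly one endpoint is unseen: keep as neighbor
            # evidence for the unseen entities in the test set T.
            auxiliary.append((s, r, o))
    # Drop any validation triplet touching an unseen entity.
    new_valid = [(s, r, o) for (s, r, o) in valid_triplets
                 if s not in U and o not in U]
    return new_train, auxiliary, new_valid
```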
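
The fixed-size neighborhood sampling with zero padding described under Experiment Setup might look like the following sketch. The padding convention (id 0 for both relation and entity) and the names are assumptions, not taken from the paper.

```python
import random

NUM_NEIGHBORS = 64   # fixed neighborhood size from the paper
PAD = (0, 0)         # assumed (relation_id, entity_id) padding pair

def sample_neighbors(neighbor_lists, entity):
    """Randomly sample 64 (relation, neighbor) pairs for an entity,
    zero-padding when fewer than 64 neighbors exist."""
    neighbors = neighbor_lists.get(entity, [])
    if len(neighbors) >= NUM_NEIGHBORS:
        return random.sample(neighbors, NUM_NEIGHBORS)
    return neighbors + [PAD] * (NUM_NEIGHBORS - len(neighbors))
```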
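
The hyper-parameter search for link prediction is a plain grid over the three value lists quoted above; a sketch follows, where `validation_score` is a hypothetical stand-in for training LAN under one configuration and returning its validation metric.

```python
import itertools

def validation_score(alpha, d, gamma):
    """Hypothetical placeholder: train LAN with this configuration and
    return its link-prediction metric on the validation set."""
    raise NotImplementedError

grid = itertools.product(
    [0.001, 0.005, 0.01, 0.1],   # learning rate alpha
    [20, 50, 100, 200],          # neighbor embedding dimension d
    [0.5, 1.0, 2.0, 4.0],        # margin gamma
)
best_cfg = max(grid, key=lambda cfg: validation_score(*cfg))
# The paper reports the optimum alpha = 0.001, d = 100, gamma = 1.0.
```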