Lifelong Embedding Learning and Transfer for Growing Knowledge Graphs
Authors: Yuanning Cui, Yuxin Wang, Zequn Sun, Wenqiang Liu, Yiqiao Jiang, Kexin Han, Wei Hu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments regarding link prediction accuracy, knowledge transfer capability, and learning efficiency to validate the proposed model, LKGE. The datasets and source code are available at https://github.com/nju-websoft/LKGE. |
| Researcher Affiliation | Collaboration | Yuanning Cui (1), Yuxin Wang (1), Zequn Sun (1), Wenqiang Liu (3), Yiqiao Jiang (3), Kexin Han (3), Wei Hu (1,2)*. (1) State Key Laboratory for Novel Software Technology, Nanjing University, China; (2) National Institute of Healthcare Data Science, Nanjing University, China; (3) Interactive Entertainment Group, Tencent Inc., China |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The datasets and source code are available at https://github.com/nju-websoft/LKGE. |
| Open Datasets | Yes | To simulate a variety of aspects of KG growth, we create four datasets based on FB15K-237 (Toutanova and Chen 2015), which are entity-centric, relation-centric, fact-centric, and hybrid. We denote them by ENTITY, RELATION, FACT and HYBRID, respectively. ... The datasets and source code are available at https://github.com/nju-websoft/LKGE. |
| Dataset Splits | Yes | For each snapshot, we randomly divide the new fact set T_i into a training set D_i, a validation set V_i and a test set Q_i by a split ratio of 3:1:1. (A hedged split sketch is given after the table.) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., CPU, GPU models, memory). |
| Software Dependencies | No | The paper mentions the 'Adam optimizer' and 'TransE' but does not specify any software names with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For a fair comparison, we first tune the hyperparameters of the base model using grid search: learning rate in {0.0005, 0.0001, 0.001}, batch size in {1024, 2048}, embedding dimension in {100, 200}. Then, we use the same base model for LKGE and all competitors, and tune other hyperparameters. For the regularization models, the α of the regularization loss is in {0.01, 0.1, 1.0}. For our model, the β of the MAE loss is in {0.01, 0.1, 1.0}. For all competitors, we use the Adam optimizer and set the patience of early stopping to 3. (A hedged grid-search sketch is given after the table.) |
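
The 3:1:1 split quoted in the Dataset Splits row can be reproduced in a few lines. The sketch below is an illustration only, not the authors' preprocessing script (that lives in the linked repository); `new_facts` is assumed to be the list of triples that are new in a given snapshot, and the random seed is a placeholder.

```python
import random

def split_snapshot(new_facts, seed=0):
    """Randomly split one snapshot's new facts into train/valid/test by 3:1:1."""
    facts = list(new_facts)
    random.Random(seed).shuffle(facts)
    n = len(facts)
    n_train = 3 * n // 5               # 3 parts -> training set D_i
    n_valid = n // 5                   # 1 part  -> validation set V_i
    train = facts[:n_train]
    valid = facts[n_train:n_train + n_valid]
    test = facts[n_train + n_valid:]   # remaining ~1 part -> test set Q_i
    return train, valid, test
```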
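
Similarly, the base-model tuning described in the Experiment Setup row amounts to a search over a small hyperparameter grid. The sketch below is a hypothetical outline, not the paper's code: `train_and_eval` stands in for the actual LKGE training loop and is assumed to return a validation score (e.g., MRR) after training with Adam under early stopping with patience 3.

```python
import itertools

# Grid quoted in the Experiment Setup row.
GRID = {
    "lr": [0.0005, 0.0001, 0.001],
    "batch_size": [1024, 2048],
    "dim": [100, 200],
}

def grid_search(train_and_eval, patience=3):
    """Return the best-scoring configuration over the base-model grid."""
    best_cfg, best_score = None, float("-inf")
    for lr, bs, dim in itertools.product(GRID["lr"], GRID["batch_size"], GRID["dim"]):
        cfg = {"lr": lr, "batch_size": bs, "dim": dim,
               "optimizer": "Adam", "patience": patience}
        score = train_and_eval(cfg)    # hypothetical: validation metric under early stopping
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```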