Towards Continual Knowledge Graph Embedding via Incremental Distillation
Authors: Jiajun Liu, Wenjun Ke, Peng Wang, Ziyu Shang, Jinhua Gao, Guozheng Li, Ke Ji, Yanhe Liu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate the superiority of IncDE over state-of-the-art baselines. Notably, the incremental distillation mechanism contributes to improvements of 0.2%-6.5% in the mean reciprocal rank (MRR) score. More exploratory experiments validate the effectiveness of IncDE in proficiently learning new knowledge while preserving old knowledge across all time steps. |
| Researcher Affiliation | Academia | School of Computer Science and Engineering, Southeast University; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China; Institute of Computing Technology, Chinese Academy of Sciences |
| Pseudocode | No | The paper describes its methods in text but does not include structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | No | The paper states 'The datasets are available at https://github.com/seukgcode/IncDE.' but does not explicitly provide a link or statement confirming the availability of the source code for the IncDE methodology itself. |
| Open Datasets | Yes | We use seven datasets for CKGE, including four public datasets (Cui et al. 2023): ENTITY, RELATION, FACT, HYBRID, as well as three new datasets constructed by us: GraphEqual, GraphHigher, and GraphLower. ... The datasets are available at https://github.com/seukgcode/IncDE. |
| Dataset Splits | Yes | The train, valid, and test sets are allocated 3:1:1 for each time step. |
| Hardware Specification | Yes | All experiments are implemented on the NVIDIA RTX 3090Ti GPU with PyTorch (Paszke et al. 2019). |
| Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al. 2019)' as software used but does not provide a specific version number for it or other software dependencies. |
| Experiment Setup | Yes | The embedding size for entities and relations is 200. We tune the batch size in [512, 1024, 2048]. We choose Adam as the optimizer and set the learning rate from [1e-5, 1e-4, 1e-3]. In our experiments, we set the max number of triples in each layer M in [512, 1024, 2048]. |
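
For readers trying to reproduce the quoted setup, the reported hyperparameters amount to a small search grid. The sketch below is a minimal, hypothetical reconstruction of that grid in Python; `CKGEConfig`, `hyperparameter_grid`, and the field names are illustrative placeholders, not part of the IncDE code release.

```python
# Hypothetical reconstruction of the quoted search space; `CKGEConfig` and
# `hyperparameter_grid` are illustrative names, not the authors' released API.
from dataclasses import dataclass
from itertools import product


@dataclass
class CKGEConfig:
    embedding_dim: int = 200           # entity/relation embedding size (fixed in the paper)
    batch_size: int = 1024             # tuned over [512, 1024, 2048]
    learning_rate: float = 1e-4        # Adam optimizer, tuned over [1e-5, 1e-4, 1e-3]
    max_triples_per_layer: int = 1024  # M, tuned over [512, 1024, 2048]


def hyperparameter_grid():
    """Yield every combination of the tunable settings reported in the paper."""
    for bs, lr, m in product([512, 1024, 2048], [1e-5, 1e-4, 1e-3], [512, 1024, 2048]):
        yield CKGEConfig(batch_size=bs, learning_rate=lr, max_triples_per_layer=m)


if __name__ == "__main__":
    print(sum(1 for _ in hyperparameter_grid()))  # 27 candidate configurations
```

Each configuration would presumably be scored on the validation split of every time step, with the best-performing setting (e.g., by MRR) retained for the reported results.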