LLM-based Multi-Level Knowledge Generation for Few-shot Knowledge Graph Completion
Authors: Qian Li, Zhuo Chen, Cheng Ji, Shiqi Jiang, Jianxin Li
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Most notably, our method achieves SOTA results in both FKGC and multi-modal FKGC benchmarks, significantly advancing KG completion and enhancing the understanding and application of LLMs in structured knowledge generation and assessment. |
| Researcher Affiliation | Collaboration | (1) State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China; (2) School of Computer Science and Engineering, Beihang University, Beijing, China; (3) College of Computer Science and Technology, Zhejiang University, Zhejiang, China; (4) Zhongguancun Lab, Beijing, China |
| Pseudocode | No | The paper describes its methods textually but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is available at https://github.com/xiaoqian19940510/MuKDC. |
| Open Datasets | Yes | We employ two public benchmark datasets for FKGC: NELL and Wiki [Mitchell et al., 2018; Vrandečić and Krötzsch, 2014]. |
| Dataset Splits | Yes | We divide NELL into 51/5/11 and Wiki into 133/16/34 relations for training, validation, and testing, respectively. The splits for training, validation, and testing are allocated as 267/18/71 task relations for MM-FB15K and 51/6/12 for MM-DBpedia, following the 15:1:4 ratio, in line with prior studies [Zhang et al., 2022]. |
| Hardware Specification | Yes | Our FKGC is implemented using PyTorch and trained on a Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch' and the 'LLaVA' model but does not provide version numbers for these software dependencies. |
| Experiment Setup | No | The paper states 'The threshold for the TransE [Bordes et al., 2013] model during the Consistency Assessment process is set to 1.0' and 'All other experimental settings not mentioned here, including the training procedures for FKGC, are kept consistent with those reported in [Zhang et al., 2022]', but it does not itself list concrete hyperparameters or a detailed experimental setup. |
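
The only concrete setup detail the paper reports is the TransE threshold of 1.0 used during Consistency Assessment. As a point of reference, the sketch below shows how such a threshold-based filter might look in PyTorch; the function names (`transe_score`, `is_consistent`), the choice of L1 norm, and the batch shapes are illustrative assumptions, not the authors' released code.

```python
import torch

def transe_score(h: torch.Tensor, r: torch.Tensor, t: torch.Tensor,
                 p: int = 1) -> torch.Tensor:
    # TransE [Bordes et al., 2013] scores a triple (h, r, t) by the
    # distance ||h + r - t||; lower scores mean a more plausible triple.
    return torch.norm(h + r - t, p=p, dim=-1)

def is_consistent(h: torch.Tensor, r: torch.Tensor, t: torch.Tensor,
                  threshold: float = 1.0) -> torch.Tensor:
    # Assumed filter: keep a generated triple only if its TransE
    # distance falls below the paper's reported threshold of 1.0.
    return transe_score(h, r, t) <= threshold

# Usage: filter a batch of four candidate triples with 100-dim embeddings.
dim = 100
h, r, t = (torch.randn(4, dim) for _ in range(3))
keep_mask = is_consistent(h, r, t)  # boolean mask over the candidates
```

Whether the paper compares a raw distance or a normalized score against the threshold is not stated, so the comparison direction above is likewise an assumption.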