Learning Multi-Granularity and Adaptive Representation for Knowledge Graph Reasoning

Authors: Ziyu Shang, Peng Wang, Wenjun Ke, Jiajun Liu, Hailang Huang, Guozheng Li, Chenxiao Wu, Jianghan Liu, Xiye Chen, Yining Li

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Systematic experiments were conducted on 15 benchmarks, and MulGA achieves a significant improvement in MRR of, on average, 1.5% on transductive and 2.7% on inductive tasks over existing state-of-the-art methods (Section 4: Experiments; see the MRR sketch after this table).
Researcher Affiliation | Academia | 1) School of Computer Science and Engineering, Southeast University; 2) Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education; 3) SKLSDE, School of Computer Science and Engineering, Beihang University; 4) College of Software Engineering, Southeast University; 5) College of Information Engineering, Nanjing University of Finance & Economics
Pseudocode | No | No structured pseudocode or algorithm blocks are present in the paper.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | For the transductive setting, WN18RR [Dettmers et al., 2018], FB15K-237 [Toutanova et al., 2015], and NELL-995 [Das et al., 2018] are used to evaluate the performance of MulGA; for the inductive setting, the inductive benchmark datasets provided with GraIL [Teru et al., 2020] are used (a loading sketch follows the table).
Dataset Splits | Yes | Validation and test splits are used throughout: Figure 3 plots validation MRR against training time for competitive models on FB15K-237 (left) and FB15K-237 (V1) (right); Table 5 reports MAD on NELL-995 (V1), where values lie in (0, 2] and a smaller value indicates a higher degree of smoothness and a greater tendency toward over-smoothing, with separate Validation and Test columns for WN18RR (V1) and NELL-995 (V1) (see the MAD sketch after this table).
Hardware Specification | Yes | Table 3: Efficiency of competitive models on an A100 GPU (80 GB), covering three aspects: wall time per epoch (WT), inference time (IT), and the number of free parameters (P) (see the measurement sketch after this table).
Software Dependencies | No | The paper does not provide specific version numbers for the software dependencies or libraries used in the experiments.
Experiment Setup | No | The paper describes the optimization objective and evaluation metrics but does not provide specific hyperparameter values (e.g., learning rate, batch size, optimizer settings) or detailed training configurations.
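
The MRR figures quoted under Research Type are mean reciprocal ranks, the standard link-prediction metric. A minimal sketch of the metric, assuming 1-indexed ranks of the correct entity in each test triple's candidate list; the function name and example ranks are illustrative, not from the paper:

```python
def mean_reciprocal_rank(ranks):
    """MRR over a list of 1-indexed ranks of the correct entity.

    Each rank is the position of the gold head/tail entity in the
    model's sorted candidate list for one test triple.
    """
    return sum(1.0 / r for r in ranks) / len(ranks)

# Illustrative ranks for four test triples.
print(mean_reciprocal_rank([1, 3, 2, 10]))  # (1 + 1/3 + 1/2 + 1/10) / 4 ≈ 0.483
```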
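The benchmarks listed under Open Datasets are commonly distributed as tab-separated triple files (train.txt / valid.txt / test.txt); in GraIL's inductive splits, the test graph contains entities unseen during training. A minimal loader sketch under that file-format assumption; paths are illustrative:

```python
def load_triples(path):
    """Read one split of a KG benchmark: one tab-separated
    (head, relation, tail) triple per line."""
    with open(path, encoding="utf-8") as f:
        return [tuple(line.strip().split("\t")) for line in f if line.strip()]

train = load_triples("WN18RR_v1/train.txt")  # illustrative path
test = load_triples("WN18RR_v1/test.txt")

# In an inductive split, test entities need not appear in training.
train_entities = {e for h, _, t in train for e in (h, t)}
unseen = {e for h, _, t in test for e in (h, t)} - train_entities
print(f"{len(unseen)} test entities unseen during training")
```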
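Table 5's MAD (mean average distance) score, cited under Dataset Splits, quantifies over-smoothing via pairwise cosine distances between node representations, which lie in [0, 2]. A minimal NumPy sketch of a global, unmasked variant; the exact masking used in the paper is not specified here, so treat this as an approximation:

```python
import numpy as np

def mad(h: np.ndarray, eps: float = 1e-12) -> float:
    """Mean average distance over all node pairs.

    h: (n, d) matrix of node representations. Each pairwise distance
    1 - cos(h_i, h_j) lies in [0, 2]; values near 0 mean the
    representations have collapsed, i.e. over-smoothing.
    """
    hn = h / np.maximum(np.linalg.norm(h, axis=1, keepdims=True), eps)
    dist = 1.0 - hn @ hn.T          # pairwise cosine distances
    np.fill_diagonal(dist, 0.0)     # ignore self-distances
    n = h.shape[0]
    return float(dist.sum() / (n * (n - 1)))
```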
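Table 3's efficiency columns (WT, IT, P) can be measured with standard PyTorch utilities. A minimal sketch, assuming a CUDA device; `train_one_epoch`, `model`, and `test_batch` are hypothetical placeholders, not names from the paper:

```python
import time
import torch

def free_parameters(model: torch.nn.Module) -> int:
    """P: number of trainable (free) parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

def timed(fn) -> float:
    """Wall-clock seconds for one call, with GPU work flushed so
    asynchronous CUDA kernels are fully counted."""
    torch.cuda.synchronize()
    start = time.perf_counter()
    fn()
    torch.cuda.synchronize()
    return time.perf_counter() - start

# WT: wall time per training epoch; IT: inference time over the test set.
# wt = timed(train_one_epoch)             # hypothetical callable
# it = timed(lambda: model(test_batch))   # hypothetical model / batch
```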