Logical Entity Representation in Knowledge-Graphs for Differentiable Rule Learning

Authors: Chi Han, Qizheng He, Charles Yu, Xinya Du, Hanghang Tong, Heng Ji

ICLR 2023

Reproducibility Variable | Result | Evidence (LLM Response)

Research Type | Experimental | "Empirical results demonstrate that with LERP, our model outperforms other rule learning methods in knowledge graph completion and is comparable or even superior to state-of-the-art black-box methods." (Section 5, Experiments)

Researcher Affiliation | Academia | University of Illinois Urbana-Champaign; The University of Texas at Dallas

Pseudocode | No | The paper contains no clearly labeled "Pseudocode" or "Algorithm" blocks, nor does it present structured steps in a code-like format.

Open Source Code | Yes | "All code and data are publicly available at https://github.com/Glaciohound/LERP."

Open Datasets | Yes | "We follow previous works (Yang et al., 2017; Sadeghian et al., 2019) and evaluate on the Unified Medical Language System (UMLS), Kinship, and Family datasets (Kok & Domingos, 2007) as well as the WN18 (Bordes et al., 2013) and WN18RR (Dettmers et al., 2018) datasets."

Dataset Splits | Yes | "For Kinship and UMLS, we follow the data split from Qu et al. (2020), and for Family, we follow the split used by Sadeghian et al. (2019). We search for hyper-parameters according to validation set performance."

Hardware Specification | No | The paper mentions "Our 16GB CUDA memory space" in Appendix F but does not name the GPU or CPU models used for the experiments.

Software Dependencies | No | The paper does not list software packages or version numbers; the closest detail is the optimization setup: "For optimization, we use Adam with a learning rate of 0.1, β1 = 0.9, and β2 = 0.999."

Experiment Setup | Yes | "We search for hyper-parameters according to validation set performance, and finally adopt the hyperparameters dimension m = 80 and depth T = 2 for LERP, and the maximum rule length of K = 3. We use 4 rules for WN18RR and WN18 and Family and 200 rules per target relation type for Kinship and UMLS dataset. Our model is trained for 4 epochs for WN18RR and WN18 and 10 epochs for Family, Kinship, and UMLS. For optimization, we use Adam with a learning rate of 0.1, β1 = 0.9, and β2 = 0.999."
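The hyper-parameters quoted in the Experiment Setup row can be collected into a single configuration sketch. This is illustrative only: the identifiers below are assumptions for readability and are not taken from the LERP codebase at https://github.com/Glaciohound/LERP, which may organize its configuration differently.

```python
# Hyper-parameters as reported in the paper's experiment setup.
# All names here are illustrative, not the LERP repository's actual config keys.

LERP_CONFIG = {
    "dimension_m": 80,         # LERP dimension m
    "depth_T": 2,              # LERP depth T
    "max_rule_length_K": 3,    # maximum rule length K
    "optimizer": {             # Adam settings from the paper
        "name": "adam",
        "lr": 0.1,
        "betas": (0.9, 0.999),
    },
}

# Per-dataset settings (number of rules per target relation type, training epochs),
# following the quoted text.
DATASET_CONFIG = {
    "WN18":    {"num_rules": 4,   "epochs": 4},
    "WN18RR":  {"num_rules": 4,   "epochs": 4},
    "Family":  {"num_rules": 4,   "epochs": 10},
    "Kinship": {"num_rules": 200, "epochs": 10},
    "UMLS":    {"num_rules": 200, "epochs": 10},
}
```

Laying the settings out this way makes it easy to check a reproduction attempt against the paper's reported values dataset by dataset.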