Rethinking Complex Queries on Knowledge Graphs with Neural Link Predictors

Authors: Hang Yin, Zihao Wang, Yangqiu Song

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | EXPERIMENT
Researcher Affiliation | Academia | Hang Yin, Department of Mathematical Sciences, Tsinghua University (h-yin20@mails.tsinghua.edu.cn); Zihao Wang, Department of CSE, HKUST (zwanggc@cse.ust.hk); Yangqiu Song, Department of CSE, HKUST (yqsong@cse.ust.hk)
Pseudocode | Yes | "Algorithm 1: FIT algorithm on any EFO1 formulas, where FITC is FIT computed on a query graph, explained in Algorithm 2. Algorithm 2: FIT on a conjunctive query, which is represented by a query graph. We name it FITC for short."
Open Source Code | Yes | "Our code and data can be found at https://github.com/HKUST-KnowComp/FIT."
Open Datasets | Yes | "We evaluate our algorithm on various tasks. Firstly, we evaluate our algorithm on our new dataset of real EFO1 queries developed in Section 6... Secondly, we compare our algorithm with existing methods on the dataset of Tree-Form queries provided by Ren & Leskovec (2020)..." The paper reports results per query type (pni, 2il, 3il, 2m, 2nm, 3mp, 3pm, im, 3c, 3cm) on the FB15k-237, FB15k, and NELL knowledge graphs.
Dataset Splits | Yes | "The learning rate is set to 0.0001, the batch size is set to 64, the maximum training step is set to 5,000 steps and we choose the best checkpoints by the scores in the validation set."
Hardware Specification | No | No specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed machine specifications) were mentioned for running the experiments.
Software Dependencies | No | The paper mentions PyTorch in Appendix F, but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | "The learning rate is set to 0.0001, the batch size is set to 64, the maximum training step is set to 5,000 steps and we choose the best checkpoints by the scores in the validation set."
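The reported training hyperparameters can be collected into a minimal configuration sketch for anyone attempting a reproduction. The `TrainConfig` name and its field names below are illustrative choices, not taken from the paper; only the numeric values (learning rate 0.0001, batch size 64, 5,000 maximum steps, checkpoint selection by validation score) come from the quoted experiment setup.

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    """Hyperparameters quoted in the paper; field names are illustrative."""
    learning_rate: float = 1e-4          # "learning rate is set to 0.0001"
    batch_size: int = 64                 # "batch size is set to 64"
    max_steps: int = 5_000               # "maximum training step is set to 5,000 steps"
    select_by: str = "validation score"  # "best checkpoints by the scores in the validation set"

config = TrainConfig()
print(config)
```

A reproduction would still need the unreported details (hardware, PyTorch version, optimizer choice) noted as missing in the rows above.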