Answering Complex Logical Queries on Knowledge Graphs via Query Computation Tree Optimization

Authors: Yushi Bai, Xin Lv, Juanzi Li, Lei Hou

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on 3 datasets show that QTO obtains state-of-the-art performance on complex query answering, outperforming previous best results by an average of 22%.
Researcher Affiliation | Academia | Department of Computer Science and Technology, BNRist; KIRC, Institute for Artificial Intelligence; Tsinghua University, Beijing 100084, China. Correspondence to: Lei Hou <houlei@tsinghua.edu.cn>.
Pseudocode | Yes | Algorithm 1: Forward Propagation Function; Algorithm 2: Backward Propagation Function; Algorithm 3: Query Computation Tree Optimization.
Open Source Code | Yes | The code of our paper is at https://github.com/bys0318/QTO.
Open Datasets | Yes | We experiment on three knowledge graph datasets: FB15k (Bordes et al., 2013), FB15k-237 (Toutanova & Chen, 2015), and NELL995 (Xiong et al., 2017).
Dataset Splits | Yes | In the valid/test set, easy answers are entities that can be inferred from edges in the training/valid graph, while hard answers are those that require predicting missing edges in the valid/test graph. Table 6 summarizes the statistics of the three datasets (#Entities, #Relations, #Training edges, #Valid edges, #Test edges).
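The easy/hard split described above can be sketched in code. This is a hedged illustration, not the paper's implementation: the function name `split_answers` and the 1-hop query setting are assumptions for demonstration, with edges given as (head, relation, tail) triples.

```python
def split_answers(query_head, query_rel, train_edges, valid_edges, test_edges):
    """Partition the answers of a 1-hop query in the test set into
    easy answers (reachable via training/valid graph edges) and
    hard answers (require predicting a missing test-graph edge)."""
    def tails(edges):
        # Entities reachable from query_head via query_rel in this edge set.
        return {t for (h, r, t) in edges if h == query_head and r == query_rel}

    easy = tails(train_edges) | tails(valid_edges)
    all_answers = easy | tails(test_edges)
    hard = all_answers - easy
    return easy, hard

# Toy example (hypothetical triples): one edge is observed during
# training, another exists only in the test graph.
train = {("paris", "capital_of", "france")}
valid = set()
test = {("paris", "capital_of", "eu")}
easy, hard = split_answers("paris", "capital_of", train, valid, test)
# easy == {"france"}, hard == {"eu"}
```

The same partition generalizes to multi-hop queries by replacing `tails` with query execution over the respective graph.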
Hardware Specification | Yes | Table 5 reports inference time (ms/query) for each query type on FB15k-237, evaluated on one RTX 3090 GPU.
Software Dependencies | No | The paper mentions software components such as ComplEx, the N3 regularizer, and a KGE implementation (https://github.com/facebookresearch/ssl-relation-prediction), but it does not provide version numbers for these dependencies (e.g., the PyTorch version or specific library versions).
Experiment Setup | Yes | The best hyperparameters of the pretrained KGE and of QTO are given in Table 7. The KGE is a ComplEx model (Trouillon et al., 2016) trained with the N3 regularizer (Lacroix et al., 2018) and an auxiliary relation prediction task (Chen et al., 2021); its hyperparameters include embedding dimension d, learning rate lr, batch size b, regularization strength λ, auxiliary relation prediction weight w, and the number of epochs. QTO's hyperparameters are the threshold ϵ and the negation scaling coefficient α.
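The hyperparameter groups listed above can be organized as a small config, which is useful when re-running the setup. This is a hedged sketch: all values below are placeholders, not the paper's Table 7 values, and the dict names are assumptions.

```python
# Placeholder config mirroring the hyperparameter names in the text.
# Values are illustrative only; consult Table 7 of the paper for the
# actual settings per dataset.
kge_config = {
    "embedding_dim": 1000,    # d
    "learning_rate": 0.1,     # lr
    "batch_size": 1000,       # b
    "reg_strength": 0.05,     # λ, N3 regularization strength
    "rel_pred_weight": 0.25,  # w, auxiliary relation prediction weight
    "epochs": 100,
}
qto_config = {
    "epsilon": 0.001,  # threshold ϵ
    "alpha": 3.0,      # negation scaling coefficient α
}
```

Keeping the KGE and QTO settings in separate dicts reflects the two-stage setup: the KGE is pretrained first, then QTO runs on top of it.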