Fuzzy Logic Based Logical Query Answering on Knowledge Graphs

Authors: Xuelu Chen, Ziniu Hu, Yizhou Sun

AAAI 2022, pp. 3939-3948

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on two benchmark datasets demonstrate that FuzzQE provides significantly better performance in answering FOL queries compared to state-of-the-art methods."
Researcher Affiliation | Academia | Xuelu Chen, Ziniu Hu, Yizhou Sun; Department of Computer Science, University of California, Los Angeles; {shirleychen, bull, yzsun}@ucla.edu
Pseudocode | No | The paper does not contain any explicit sections or figures labeled "Pseudocode" or "Algorithm" (see the fuzzy-operator sketch after this table).
Open Source Code | No | "For GQE, Query2Box, and BetaE we use the implementation provided by (Ren and Leskovec 2020)" at https://github.com/snap-stanford/KGReasoning. This link is for the baseline models, not the authors' FuzzQE code; there is no other mention of a code release for FuzzQE.
Open Datasets | Yes | "We evaluate our model on two benchmark datasets provided by (Ren and Leskovec 2020), which contain 14 types of logical queries on FB15k-237 (Toutanova and Chen 2015) and NELL995 (Xiong, Hoang, and Wang 2017) respectively."
Dataset Splits | Yes | "The validation/test set of the original 9 query types are regenerated to ensure that the number of answers per query is not excessive, making this task more challenging. In the new datasets, 10 query structures are used for both training and evaluation: 1p, 2p, 3p, 2i, 3i, 2in, 3in, inp, pni, pin. 4 query structures (ip, pi, 2u, up) are not used for training but only included in evaluation..."
Hardware Specification | Yes | "On a NVIDIA GP102 TITAN Xp (12GB), the average time for CQD to answer a FOL query on FB15k-237 is 13.9 ms (milliseconds), while FuzzQE takes only 0.3 ms." (See the latency-measurement sketch after this table.)
Software Dependencies | No | The paper mentions "we use AdamW (Loshchilov and Hutter 2019) as the optimizer" but does not specify version numbers for other software dependencies such as the programming language or deep learning framework.
Experiment Setup | Yes | "We use AdamW (Loshchilov and Hutter 2019) as the optimizer. Training terminates with early stopping based on the average MRR on the validation set with a patience of 15k steps." (See the training-loop sketch after this table.)
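
Since the paper ships no pseudocode, the following is a minimal sketch of the kind of fuzzy logical operators a FuzzQE-style query embedding relies on, assuming product fuzzy logic (elementwise product for conjunction, probabilistic sum for disjunction, complement for negation) over embeddings constrained to [0, 1]. The `project` transform and all names here are hypothetical illustrations, not the authors' implementation.

```python
import torch

# Hedged sketch: fuzzy logical operators over query embeddings in [0, 1]^d,
# assuming product fuzzy logic. Not the authors' implementation.

def fuzzy_and(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Product t-norm: elementwise conjunction.
    return x * y

def fuzzy_or(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Probabilistic sum, the t-conorm dual to the product t-norm.
    return x + y - x * y

def fuzzy_not(x: torch.Tensor) -> torch.Tensor:
    # Standard fuzzy complement.
    return 1.0 - x

def project(q: torch.Tensor, rel: torch.nn.Linear) -> torch.Tensor:
    # Hypothetical relation projection: a learned transform followed by a
    # sigmoid to keep the result inside the fuzzy space [0, 1]^d.
    return torch.sigmoid(rel(q))

# Example: a 2i query ("entities reachable from anchor a1 via r1 AND from
# anchor a2 via r2") intersects two projected embeddings.
d = 32
r1, r2 = torch.nn.Linear(d, d), torch.nn.Linear(d, d)
a1, a2 = torch.rand(d), torch.rand(d)  # anchor entity embeddings in [0, 1)
query_2i = fuzzy_and(project(a1, r1), project(a2, r2))

# A 2in query additionally negates one branch: r1(a1) AND NOT r2(a2).
query_2in = fuzzy_and(project(a1, r1), fuzzy_not(project(a2, r2)))
```

The same three operators compose the remaining query structures listed in the Dataset Splits row (e.g. up chains a projection after a disjunction), which is what lets one trained set of operators answer query types never seen during training.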
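The 13.9 ms vs. 0.3 ms figures in the hardware row are per-query latencies; a generic timing harness like the sketch below, with explicit CUDA synchronization so GPU kernels are actually finished before the clock is read, is one way such numbers could be measured. The `model` and `queries` arguments are placeholders, not the authors' benchmark code.

```python
import time
import torch

def avg_query_latency_ms(model, queries, device="cuda", warmup=10):
    """Average per-query answering latency in milliseconds.

    `model` and `queries` stand in for a trained query-embedding model and
    a list of preprocessed FOL queries; this is a generic timing harness.
    """
    model.eval()
    with torch.no_grad():
        for q in queries[:warmup]:      # warm up CUDA kernels and caches
            model(q.to(device))
        torch.cuda.synchronize()        # drain pending GPU work before timing
        start = time.perf_counter()
        for q in queries:
            model(q.to(device))
        torch.cuda.synchronize()        # wait for the last kernel to finish
        elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / len(queries)
```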
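The experiment-setup row quotes AdamW with early stopping on validation MRR and a patience of 15k steps; below is a minimal sketch of such a loop, where the model interface, learning rate, validation interval, and `validate_mrr` helper are all assumed stand-ins rather than the paper's actual training script.

```python
import torch

def train(model, train_loader, validate_mrr, lr=1e-4, patience=15_000):
    # AdamW as quoted in the paper; the learning rate is an assumed value.
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    best_mrr, best_step, step = 0.0, 0, 0
    for batch in train_loader:          # assume a cycled/infinite loader
        loss = model.loss(batch)        # hypothetical loss interface
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        step += 1
        if step % 1000 == 0:            # assumed validation interval
            mrr = validate_mrr(model)   # average MRR on the validation set
            if mrr > best_mrr:
                best_mrr, best_step = mrr, step
                torch.save(model.state_dict(), "best.pt")
        # Early stopping: no validation improvement within `patience` steps.
        if step - best_step > patience:
            break
    return best_mrr
```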