Neural-Symbolic Models for Logical Queries on Knowledge Graphs

Authors: Zhaocheng Zhu, Mikhail Galkin, Zuobai Zhang, Jian Tang

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on 3 datasets show that GNN-QE significantly improves over previous state-of-the-art models in answering FOL queries. Meanwhile, GNN-QE can predict the number of answers without explicit supervision and provide visualizations for intermediate variables.
Researcher Affiliation | Academia | ¹Mila - Québec AI Institute, ²Université de Montréal, ³McGill University, ⁴HEC Montréal, ⁵CIFAR AI Chair.
Pseudocode | Yes | Alg. 1 shows the pseudocode for converting a query expression to postfix notation. Alg. 2 illustrates the steps of batch execution over postfix expressions. (Hedged sketches of both steps are given below the table.)
Open Source Code | Yes | Code is available at https://github.com/DeepGraphLearning/GNN-QE
Open Datasets | Yes | We evaluate our method on the FB15k (Bordes et al., 2013), FB15k-237 (Toutanova & Chen, 2015) and NELL995 (Xiong et al., 2017) knowledge graphs. To make a fair comparison with baselines, we use the standard train, validation and test FOL queries generated by the BetaE paper (Ren & Leskovec, 2020)...
Dataset Splits | Yes | To make a fair comparison with baselines, we use the standard train, validation and test FOL queries generated by the BetaE paper (Ren & Leskovec, 2020)...
Hardware Specification | Yes | Our model is trained with the Adam optimizer (Kingma & Ba, 2014) on 4 Tesla V100 GPUs.
Software Dependencies | No | The paper states 'Our work is implemented based on the open-source codebase of GNNs for KG completion' (footnote 4) and mentions the Adam optimizer, but it does not give version numbers for software components such as Python, PyTorch, or other libraries, which are necessary for full reproducibility.
Experiment Setup | Yes | The neural relation projection model is a 4-layer GNN. The model is trained with self-adversarial negative sampling... using the Adam optimizer... Hyperparameters of GNN-QE are given in App. B, which lists specific values for GNN #layers, hidden dim., MLP #layers, traversal dropout probability, batch size, learning rate, iterations, and adversarial temperature. (A sketch of the self-adversarial loss is given below the table.)
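
The sketches below are minimal illustrations written against the quoted descriptions, not the authors' actual algorithms; operator names, token formats, and helper functions are assumptions. First, converting a fully parenthesized, tokenized query expression to postfix notation with a standard stack-based scan, in the spirit of Alg. 1:

```python
# Minimal sketch of expression-to-postfix conversion (cf. Alg. 1).
# The operator vocabulary below is an assumption, not the paper's exact one.

def to_postfix(tokens):
    """Convert a tokenized, fully parenthesized query expression to postfix."""
    operators = {"projection", "intersection", "union", "negation"}
    output, stack = [], []
    for tok in tokens:
        if tok == "(":
            stack.append(tok)
        elif tok == ")":
            # close a subexpression: flush operators back to the matching "("
            while stack[-1] != "(":
                output.append(stack.pop())
            stack.pop()  # discard "("
        elif tok in operators:
            stack.append(tok)
        else:  # operand: an anchor entity or relation symbol
            output.append(tok)
    while stack:
        output.append(stack.pop())
    return output

# e.g. to_postfix(["(", "(", "e1", "r1", "projection", ")",
#                  "(", "e2", "r2", "projection", ")", "intersection", ")"])
# -> ["e1", "r1", "projection", "e2", "r2", "projection", "intersection"]
```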
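Second, a per-query view of executing such a postfix plan (Alg. 2 in the paper batches this step across queries). GNN-QE represents each intermediate variable as a fuzzy set over all entities; the product fuzzy-logic operators below are one standard choice and may differ from the paper's exact operators, and `relation_projection` is a placeholder for the learned GNN:

```python
import torch

def execute_postfix(postfix, num_entities, relation_projection):
    """Evaluate a postfix query plan; each operand pushes a fuzzy set in [0,1]^|V|."""
    stack = []
    for op in postfix:
        if op[0] == "entity":          # anchor entity -> one-hot fuzzy set
            x = torch.zeros(num_entities)
            x[op[1]] = 1.0
            stack.append(x)
        elif op[0] == "projection":    # learned GNN maps fuzzy set -> fuzzy set
            stack.append(relation_projection(stack.pop(), op[1]))
        elif op[0] == "intersection":  # product t-norm (assumed operator)
            y, x = stack.pop(), stack.pop()
            stack.append(x * y)
        elif op[0] == "union":         # probabilistic sum (assumed operator)
            y, x = stack.pop(), stack.pop()
            stack.append(x + y - x * y)
        elif op[0] == "negation":      # fuzzy complement
            stack.append(1.0 - stack.pop())
    return stack.pop()                 # fuzzy set over candidate answers
```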
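Finally, the self-adversarial negative sampling named in the experiment setup follows Sun et al. (2019); this is the standard formulation, and the exact loss used by GNN-QE may differ in detail. `adv_temperature` corresponds to the "adv. temperature" hyperparameter listed in App. B:

```python
import torch
import torch.nn.functional as F

def self_adversarial_loss(pos_score, neg_score, adv_temperature=1.0):
    """Standard self-adversarial negative sampling loss.

    pos_score: (batch,) scores of true answers (higher = more plausible).
    neg_score: (batch, num_negatives) scores of sampled non-answers.
    """
    pos_loss = -F.logsigmoid(pos_score)
    # weight harder negatives more, without backpropagating through the weights
    weights = torch.softmax(neg_score * adv_temperature, dim=-1).detach()
    neg_loss = -(weights * F.logsigmoid(-neg_score)).sum(dim=-1)
    return (pos_loss + neg_loss).mean()
```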