Neural-Symbolic Entangled Framework for Complex Query Answering

Authors: Zezhong Xu, Wen Zhang, Peng Ye, Hui Chen, Huajun Chen

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results prove that our model outperforms existing complex query answering methods over standard knowledge graphs: FB15K-237 [18] and NELL-995 [22].
Researcher Affiliation | Collaboration | Zezhong Xu¹, Wen Zhang¹, Peng Ye¹, Hui Chen³, Huajun Chen¹,². Affiliations: ¹Zhejiang University & AZFT Joint Lab for Knowledge Engine, China; ²Hangzhou Innovation Center, Zhejiang University; ³Alibaba Group.
Pseudocode | No | The paper describes its methodology using equations and textual descriptions but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Source code of ENeSy is available at https://github.com/zjukg/ENeSy.
Open Datasets | Yes | We perform the experiments on two benchmarks, FB15K-237 [18] and NELL-995 [22].
Dataset Splits | Yes | Specifically, with the standard training, validation, and test sets, the edges in the KG can be divided into three parts: training edges, validation edges, and test edges. (A sketch of this split convention appears after the table.)
Hardware Specification | Yes | We implement our model with the PyTorch framework and train it on an RTX 3090 GPU.
Software Dependencies | No | The paper mentions implementing the model with PyTorch but does not specify its version or any other software dependencies with their versions.
Experiment Setup | Yes | The Adam optimizer was used to tune parameters, with a learning rate of 0.0001 that decreases during training. The embedding dimensions of entities and relations are both 1024, and the hidden-state dimension of the MLP is 1024. The training batch sizes are {64, 16} for FB15K-237 and NELL-995, respectively, while the negative sample sizes are {128, 32}. The margin γ used in the similarity computation is 24, the threshold θ is 10^-10, and α is set to 10. (A hedged configuration sketch appears after the table.)
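
The split convention quoted in the Dataset Splits row is the standard one for complex query answering benchmarks: the training graph contains only training edges, the validation graph adds validation edges, and the test graph adds test edges on top of both. Below is a minimal Python sketch of that construction; the file names and paths are hypothetical placeholders, and the actual layout in the ENeSy repository may differ.

    def load_triples(path):
        # Read tab-separated (head, relation, tail) triples, one per line.
        with open(path) as f:
            return [tuple(line.strip().split("\t")) for line in f if line.strip()]

    # Hypothetical file paths for illustration only.
    train_edges = load_triples("FB15k-237/train.txt")
    valid_edges = load_triples("FB15k-237/valid.txt")
    test_edges = load_triples("FB15k-237/test.txt")

    train_graph = set(train_edges)                # training queries are generated here
    valid_graph = train_graph | set(valid_edges)  # answers for validation queries
    test_graph = valid_graph | set(test_edges)    # answers for test queries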
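
The hyperparameters quoted in the Experiment Setup row translate directly into a training configuration. The sketch below renders them in PyTorch. Only the numeric values come from the paper's setup; the placeholder module, the exponential decay schedule (the paper says only that the learning rate decreases during training), and the margin_score helper are assumptions for illustration, not the authors' implementation.

    import torch
    from torch import nn

    EMBED_DIM = 1024   # entity/relation embedding dimension (from the paper)
    HIDDEN_DIM = 1024  # MLP hidden-state dimension (from the paper)
    GAMMA = 24.0       # margin used in the similarity computation (from the paper)
    ALPHA = 10.0       # alpha (from the paper)
    THETA = 1e-10      # threshold theta (from the paper's setup)

    # Per-dataset batch and negative-sample sizes (from the paper).
    PER_DATASET = {
        "FB15K-237": {"batch_size": 64, "negative_sample_size": 128},
        "NELL-995": {"batch_size": 16, "negative_sample_size": 32},
    }

    # Placeholder module standing in for the ENeSy model, whose neural-symbolic
    # architecture is not reproduced here.
    model = nn.Sequential(
        nn.Linear(EMBED_DIM, HIDDEN_DIM),
        nn.ReLU(),
        nn.Linear(HIDDEN_DIM, EMBED_DIM),
    )

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # One plausible realization of a decreasing learning rate.
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

    def margin_score(query_emb, entity_emb):
        # Margin-based similarity in the style common to query-embedding models:
        # larger when the candidate entity embedding is close to the query embedding.
        return GAMMA - torch.norm(query_emb - entity_emb, p=1, dim=-1)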