ConE: Cone Embeddings for Multi-Hop Reasoning over Knowledge Graphs

Authors: Zhanqiu Zhang, Jie Wang, Jiajun Chen, Shuiwang Ji, Feng Wu

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments demonstrate that ConE significantly outperforms existing state-of-the-art methods on benchmark datasets.
Researcher Affiliation | Academia | Zhanqiu Zhang (1,2), Jie Wang (1,2), Jiajun Chen (1,2), Shuiwang Ji (3), Feng Wu (1,2). 1: CAS Key Laboratory of Technology in GIPAS, University of Science and Technology of China; 2: Institute of Artificial Intelligence, Hefei Comprehensive National Science Center; 3: Texas A&M University. {zqzhang,jjchen}@mail.ustc.edu.cn, {jiewangx,fengwu}@ustc.edu.cn, sji@tamu.edu
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The code of ConE is available on GitHub at https://github.com/MIRALab-USTC/QE-ConE.
Open Datasets | Yes | We use three datasets: FB15k [2], FB15k-237 (FB237) [27], and NELL995 (NELL) [30].
Dataset Splits | Yes | We first build three KGs: the training KG G_train, the validation KG G_valid, and the test KG G_test, using the training edges, training+validation edges, and training+validation+test edges, respectively. Given a test (validation) query q, we aim to discover non-trivial answers ⟦q⟧_test \ ⟦q⟧_valid (⟦q⟧_valid \ ⟦q⟧_train). (See the split-construction sketch after this table.)
Hardware Specification | No | The paper does not specify any particular hardware components (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using "Adam [13] as the optimizer", but it does not provide specific version numbers for any software libraries, frameworks, or programming languages used (e.g., PyTorch version, Python version).
Experiment Setup | Yes | We use Adam [13] as the optimizer, and use grid search to find the best hyperparameters based on the performance on the validation datasets. For the search range and best hyperparameters, please refer to Appendix B.2. (See the tuning sketch after this table.)
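
The split protocol quoted in the Dataset Splits row reduces to a few set operations. Below is a minimal sketch of that protocol, not the authors' code: the triple format and the one-hop answers() helper are illustrative assumptions standing in for ConE's multi-hop query-answering machinery.

```python
# Minimal sketch of the split protocol from the "Dataset Splits" row (assumed
# details, not the authors' code). Edges are (head, relation, tail) triples;
# the one-hop `answers` helper stands in for ConE's multi-hop query answering.

def build_kg(*edge_sets):
    """Union edge sets into a single knowledge graph, represented as a set of triples."""
    kg = set()
    for edges in edge_sets:
        kg.update(edges)
    return kg

def answers(query, kg):
    """Answer a one-hop query (head, relation) by direct edge lookup (illustration only)."""
    head, relation = query
    return {t for (h, r, t) in kg if h == head and r == relation}

def non_trivial_answers(query, train_edges, valid_edges, test_edges, split="test"):
    """Return answers that only become reachable once the split's own edges are added."""
    g_train = build_kg(train_edges)                          # G_train
    g_valid = build_kg(train_edges, valid_edges)             # G_valid
    g_test = build_kg(train_edges, valid_edges, test_edges)  # G_test
    if split == "test":
        return answers(query, g_test) - answers(query, g_valid)   # ⟦q⟧_test \ ⟦q⟧_valid
    return answers(query, g_valid) - answers(query, g_train)      # ⟦q⟧_valid \ ⟦q⟧_train
```

For example, a query whose answer entity is only reachable through a test edge would be counted as a non-trivial test answer, since it cannot be read off the validation KG.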
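
Likewise, the Experiment Setup row (Adam plus grid search, scored on validation performance) can be sketched as below. The hyperparameter ranges and the toy objective are assumptions made for illustration; the paper's actual search space and best values are given in its Appendix B.2.

```python
# Hedged sketch of the tuning protocol from the "Experiment Setup" row: Adam as the
# optimizer and a grid search scored on validation performance. The ranges and the
# toy objective below are illustrative assumptions, not the paper's settings.
from itertools import product

import torch

SEARCH_SPACE = {              # hypothetical ranges, not taken from the paper
    "lr": [1e-4, 5e-4, 1e-3],
    "embedding_dim": [400, 800],
}

def train_and_validate(config, num_entities=1000, steps=10):
    """Train a stand-in embedding table with Adam and return a proxy validation score.

    A real run would train ConE's cone embeddings on the training KG and report
    MRR on validation queries; a toy L2 objective keeps this sketch runnable.
    """
    embeddings = torch.nn.Embedding(num_entities, config["embedding_dim"])
    optimizer = torch.optim.Adam(embeddings.parameters(), lr=config["lr"])
    for _ in range(steps):
        optimizer.zero_grad()
        loss = embeddings.weight.pow(2).mean()   # placeholder objective
        loss.backward()
        optimizer.step()
    return -loss.item()                          # higher is "better" for the proxy

# Grid search: keep the configuration with the best proxy validation score.
best_config = max(
    (dict(zip(SEARCH_SPACE, values)) for values in product(*SEARCH_SPACE.values())),
    key=train_and_validate,
)
print(best_config)
```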