Learning to Walk with Dual Agents for Knowledge Graph Reasoning

Authors: Denghui Zhang, Zixuan Yuan, Hao Liu, Xiaodong Lin, Hui Xiong

AAAI 2022, pp. 5932-5941

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experimental studies on three real-world KG datasets. The results demonstrate that our approach can search answers more accurately and efficiently than existing embedding-based approaches as well as traditional RL-based methods, and outperforms them on long path queries significantly.
Researcher Affiliation | Academia | ¹Rutgers University, ²Hong Kong University of Science and Technology (HKUST); {denghui.zhang, zy101, lin, hxiong}@rutgers.edu, liuh@ust.hk
Pseudocode | Yes | Algorithm 1: CURL Training Algorithm (a hedged training-loop sketch follows the table)
Open Source Code | Yes | Source code: https://github.com/RutgersDM/DKGR/tree/master
Open Datasets | Yes | We evaluate the effectiveness of CURL by performing two fundamental tasks using three real-world KG datasets, i.e., FB15K-237, WN18RR, and NELL-995. The WN18RR (Dettmers et al. 2018) and FB15K-237 (Toutanova et al. 2015) datasets are separately created from the original WN18 and FB15K datasets by removing various sources of test leakage... The NELL-995 dataset released by (Xiong, Hoang, and Wang 2017) contains separate graphs for each query relation.
Dataset Splits | No | The paper mentions training and testing sets for some experiments and implies hyperparameter tuning, but does not explicitly state the details of a validation dataset split (e.g., percentages or counts for validation).
Hardware Specification | Yes | CURL and all baselines are implemented with Pytorch framework (Paszke et al. 2019) and run on a single 2080 Ti GPU.
Software Dependencies | No | The paper mentions 'Pytorch framework (Paszke et al. 2019)' but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | A.2 Optimal Hyperparameters: To reproduce the results of our model in Table 1 and Table 2, we report the empirically optimal hyperparameters. Specifically, we set the entity embedding dimension to 50 and relation embedding dimension to 50. ... We use the Adam optimization (Kingma and Ba 2014) in REINFORCE for model training with learning rate as 0.001, and the best mini-batch size is 128. (These reported values are wired into a hedged configuration snippet below.)
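
The Pseudocode row above points to Algorithm 1 (the CURL training algorithm), which is not reproduced here. The block below is only a minimal sketch, assuming a single REINFORCE-trained path-walking agent in PyTorch, of the kind of policy-gradient loop such an algorithm describes; it is not the authors' dual-agent CURL implementation. Every name in it (PolicyNetwork, train_step, the random candidate sampling) is hypothetical; only the hyperparameters (embedding dimension 50, Adam with learning rate 0.001, mini-batch size 128) come from the paper.

```python
import torch
import torch.nn as nn

EMB_DIM = 50       # entity/relation embedding size reported in the paper
LR = 1e-3          # Adam learning rate reported in the paper
BATCH_SIZE = 128   # mini-batch size reported in the paper
PATH_LEN = 3       # rollout horizon (assumed; not taken from the paper)


class PolicyNetwork(nn.Module):
    """Scores candidate outgoing edges given the current entity and the query relation."""

    def __init__(self, n_entities: int, n_relations: int):
        super().__init__()
        self.ent = nn.Embedding(n_entities, EMB_DIM)
        self.rel = nn.Embedding(n_relations, EMB_DIM)
        self.score = nn.Sequential(
            nn.Linear(3 * EMB_DIM, EMB_DIM), nn.ReLU(), nn.Linear(EMB_DIM, 1)
        )

    def forward(self, cur_ent, query_rel, cand_rel, cand_ent):
        # cur_ent, query_rel: (B,); cand_rel, cand_ent: (B, A) candidate actions.
        state = torch.cat([self.ent(cur_ent), self.rel(query_rel)], dim=-1)     # (B, 2D)
        state = state.unsqueeze(1).expand(-1, cand_ent.size(1), -1)             # (B, A, 2D)
        action = self.ent(cand_ent) + self.rel(cand_rel)                        # (B, A, D)
        return self.score(torch.cat([state, action], dim=-1)).squeeze(-1)       # (B, A)


def train_step(policy, optimizer, batch, n_entities, n_relations, n_actions=20):
    """One REINFORCE update on a batch of (head entity, query relation, answer) triples."""
    heads, query_rels, answers = batch
    cur, log_probs = heads, []
    for _ in range(PATH_LEN):
        # Toy random candidates; a real agent would read the outgoing edges from the KG.
        cand_rel = torch.randint(n_relations, (cur.size(0), n_actions))
        cand_ent = torch.randint(n_entities, (cur.size(0), n_actions))
        dist = torch.distributions.Categorical(
            logits=policy(cur, query_rels, cand_rel, cand_ent))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        cur = cand_ent.gather(1, action.unsqueeze(1)).squeeze(1)
    reward = (cur == answers).float()  # terminal reward: 1 if the walk ends at the answer
    loss = -(torch.stack(log_probs, dim=1).sum(dim=1) * reward).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), reward.mean().item()


if __name__ == "__main__":
    # Toy sizes for illustration; real runs would walk FB15K-237, WN18RR, or NELL-995.
    n_entities, n_relations = 1_000, 200
    policy = PolicyNetwork(n_entities, n_relations)
    optimizer = torch.optim.Adam(policy.parameters(), lr=LR)
    batch = (torch.randint(n_entities, (BATCH_SIZE,)),
             torch.randint(n_relations, (BATCH_SIZE,)),
             torch.randint(n_entities, (BATCH_SIZE,)))
    print(train_step(policy, optimizer, batch, n_entities, n_relations))
```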
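
For quick reference, the Experiment Setup row's reported values translate into roughly the following PyTorch setup. The embedding tables and vocabulary sizes below are toy stand-ins, not the authors' dual-agent model; only the values in the hparams dictionary are quoted from the paper.

```python
import torch
import torch.nn as nn

# Hyperparameters quoted from the paper's Appendix A.2.
hparams = {
    "entity_emb_dim": 50,
    "relation_emb_dim": 50,
    "learning_rate": 1e-3,
    "batch_size": 128,
}

# Toy embedding tables, only to show how the dimensions and optimizer fit together.
entity_emb = nn.Embedding(10_000, hparams["entity_emb_dim"])   # vocabulary size is illustrative
relation_emb = nn.Embedding(200, hparams["relation_emb_dim"])  # relation count is illustrative

optimizer = torch.optim.Adam(
    list(entity_emb.parameters()) + list(relation_emb.parameters()),
    lr=hparams["learning_rate"],
)

# The paper reports running on a single 2080 Ti; this just moves the tables to GPU if one exists.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
entity_emb.to(device)
relation_emb.to(device)
```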