R5: Rule Discovery with Reinforced and Recurrent Relational Reasoning

Authors: Shengyao Lu, Bang Liu, Keith G. Mills, Shangling Jui, Di Niu

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive evaluations based on two public relation prediction datasets, CLUTRR (Sinha et al., 2019) and GraphLog (Sinha et al., 2020), and compare R5 with a variety of baseline methods. The experimental results demonstrate that our approach significantly outperforms state-of-the-art methods in terms of relation prediction accuracy and recall rate in rule discovery.
Researcher Affiliation | Collaboration | Shengyao Lu (1), Bang Liu (2,3), Keith G. Mills (1), Shangling Jui (4), Di Niu (1). (1) Department of Electrical and Computer Engineering, University of Alberta; (2) RALI & Mila, Université de Montréal; (3) Canada CIFAR AI Chair; (4) Huawei Kirin Solution. {shengyao,kgmills,dniu}@ualberta.ca, bang.liu@umontreal.ca, jui.shangling@huawei.com
Pseudocode | Yes | Algorithm 1: Backtrack Rewriting
Open Source Code | Yes | The implementation is available at https://github.com/sluxsr/R5_graph_reasoning.
Open Datasets | Yes | We evaluate R5 on two datasets, CLUTRR (Sinha et al., 2019) and GraphLog (Sinha et al., 2020), which test logical generalization capabilities.
Dataset Splits | No | The train set contains graphs with up to 3 or 4 edges, and the test set contains graphs with up to 10 edges. Tables 7 and 8 provide '#Train' and '#Test' counts. There is no explicit mention of a validation split.
Hardware Specification | No | The paper does not provide specific details about the hardware used for its experiments, such as exact GPU or CPU models.
Software Dependencies | No | The paper does not list software dependencies with version numbers, such as the programming language, libraries, or frameworks used for implementation.
Experiment Setup | Yes | We use the following hyperparameters for all the datasets: epochs = 10, learning rate = 0.01, n = 50, ε = 0.003, v0 = 0.6, v1 = 0.3, v2 = -0.05, v3 = -0.1, v4 = -0.3, v_T^neg = -1, σ = -1.2, and v_T^pos = 0.1, 0.35, or 0.8 depending on the dataset under investigation.
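For anyone re-running the experiments, the reported hyperparameters can be collected into a single configuration object. The sketch below is illustrative only: the key names (e.g. `learning_rate`, `v_T_neg`) are our own labels for the symbols quoted above, not identifiers from the authors' code, and `v_T_pos` is shown at one of its three reported per-dataset values.

```python
# Hypothetical config sketch of the hyperparameters reported in the paper.
# Key names are illustrative; only the numeric values come from the text.
CONFIG = {
    "epochs": 10,
    "learning_rate": 0.01,
    "n": 50,
    "epsilon": 0.003,
    # reward/score constants v0..v4 as quoted
    "v0": 0.6,
    "v1": 0.3,
    "v2": -0.05,
    "v3": -0.1,
    "v4": -0.3,
    "v_T_neg": -1.0,
    "sigma": -1.2,
    # 0.1, 0.35, or 0.8 depending on the dataset under investigation
    "v_T_pos": 0.8,
}

# Sanity check: v_T_pos must be one of the three values given in the paper.
assert CONFIG["v_T_pos"] in (0.1, 0.35, 0.8)
```

Pinning all settings in one dict like this makes it easy to log the exact configuration alongside each run, which is the main reproducibility gap the "Software Dependencies" row flags.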