RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs
Authors: Meng Qu, Junkun Chen, Louis-Pascal Xhonneux, Yoshua Bengio, Jian Tang
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on four datasets prove the effectiveness of RNNLogic. |
| Researcher Affiliation | Academia | ¹Mila – Québec AI Institute, ²Université de Montréal, ³Tsinghua University, ⁴HEC Montréal, ⁵Canadian Institute for Advanced Research (CIFAR) |
| Pseudocode | Yes | Algorithm 1: Workflow of RNNLogic (see the workflow sketch after this table). |
| Open Source Code | Yes | The code of RNNLogic is available at https://github.com/DeepGraphLearning/RNNLogic |
| Open Datasets | Yes | We choose four datasets for evaluation, including FB15k-237 (Toutanova & Chen, 2015), WN18RR (Dettmers et al., 2018), Kinship and UMLS (Kok & Domingos, 2007). |
| Dataset Splits | Yes | For Kinship and UMLS, there are no standard data splits, so we randomly sample 30% of all the triplets for training, 20% for validation, and the remaining 50% for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory, or specific cloud computing instance types used for experiments. |
| Software Dependencies | No | The paper mentions software components like 'LSTM' and 'Adam optimizer' but does not provide specific version numbers for any libraries, frameworks, or programming languages. |
| Experiment Setup | Yes | For the rule generator, the maximum length of generated rules is set to 4 for FB15k-237, 5 for WN18RR, and 3 for the rest... The sizes of the input and hidden states in RNNθ are set to 512 and 256. The learning rate is set to 1 × 10⁻³ and monotonically decreased in a cosine shape (see the PyTorch sketch after this table). |
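
Algorithm 1 in the paper trains the rule generator and the reasoning predictor jointly in an EM-style loop: the generator proposes candidate rules, the predictor is trained with them, an E-step identifies a subset of high-quality rules, and an M-step updates the generator toward those rules. Below is a minimal Python sketch of that workflow; the `generator`/`predictor` interfaces (`sample`, `train`, `score`, `fit`) and the loop hyperparameters are hypothetical placeholders for illustration, not the authors' API.

```python
def train_rnnlogic(generator, predictor, queries,
                   num_iterations=10, rules_per_query=200, top_k=100):
    """EM-style workflow sketch (interfaces are assumed, not the paper's code).

    generator: proposes logic rules for a query relation (an RNN over relations).
    predictor: answers queries with a rule set and can score individual rules.
    """
    for _ in range(num_iterations):
        for query in queries:
            # Generate a set of candidate logic rules for the query relation.
            rules = generator.sample(query.relation, rules_per_query)
            # Train the reasoning predictor with the generated rules.
            predictor.train(query, rules)
            # E-step: rank rules by the predictor's assessment and keep the
            # top ones as high-quality rules (a posterior approximation).
            ranked = sorted(rules, key=lambda r: predictor.score(query, r),
                            reverse=True)
            high_quality = ranked[:top_k]
            # M-step: update the generator to assign higher likelihood
            # to the identified high-quality rules.
            generator.fit(query.relation, high_quality)
```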
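
The experiment-setup row quotes the generator's LSTM sizes (input 512, hidden 256) and a learning rate of 1 × 10⁻³ decayed along a cosine curve with the Adam optimizer. A minimal PyTorch sketch of that configuration follows; the relation-vocabulary size and the scheduler's `T_max` are assumptions for illustration, not values reported in the paper.

```python
import torch
import torch.nn as nn

num_relations = 474   # assumed: FB15k-237's 237 relations plus inverses
max_rule_length = 4   # per the paper's setting for FB15k-237

# Rule generator components: relation tokens are embedded, fed through an
# LSTM (input 512, hidden 256 as quoted), and projected to the next relation.
embedding = nn.Embedding(num_relations, 512)
generator = nn.LSTM(input_size=512, hidden_size=256, batch_first=True)
projection = nn.Linear(256, num_relations)

params = (list(embedding.parameters()) + list(generator.parameters())
          + list(projection.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
# Learning rate monotonically decreased in a cosine shape, as stated;
# T_max (the decay horizon in steps) is an assumed value.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)
```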