Efficient Probabilistic Logic Reasoning with Graph Neural Networks
Authors: Yuyu Zhang, Xinshi Chen, Yuan Yang, Arun Ramamurthy, Bo Li, Yuan Qi, Le Song
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experiments on several benchmark datasets demonstrate that ExpressGNN leads to effective and efficient probabilistic logic reasoning. |
| Researcher Affiliation | Collaboration | (1) Georgia Institute of Technology, (2) Siemens Corporate Technology, (3) University of Illinois at Urbana-Champaign, (4) Ant Financial |
| Pseudocode | Yes | Algorithm 1: GNN() |
| Open Source Code | Yes | The full list of logic formulae is available in our source code repository. |
| Open Datasets | Yes | We evaluate ExpressGNN and other baseline methods on four benchmark datasets: UW-CSE (Richardson & Domingos, 2006), Cora (Singla & Domingos, 2005), synthetic Kinship datasets, and FB15K-237 (Toutanova & Chen, 2015) constructed from Freebase (Bollacker et al., 2008). |
| Dataset Splits | Yes | For Kinship, UW-CSE and Cora, we run ExpressGNN with a fixed number of iterations, and use the smallest subset from the original split for hyperparameter tuning. For FB15K-237, we use the original validation set to tune the hyperparameters. ... The dataset is split into training / validation / testing and we use the same split of facts from training as in prior work (Yang et al., 2017). |
| Hardware Specification | Yes | We conduct all the experiments on a GPU-enabled (Nvidia RTX 2080 Ti) Linux machine powered by Intel Xeon Silver 4116 processors at 2.10GHz with 256GB RAM. |
| Software Dependencies | No | We implement ExpressGNN using PyTorch and train it with Adam optimizer (Kingma & Ba, 2014). |
| Experiment Setup | Yes | For ExpressGNN, we use 0.0005 as the initial learning rate, and decay it by half for every 10 epochs without improvement of validation loss. ... We use a two-layer MLP with ReLU activation function as the nonlinear transformation for each embedding update step in the GNN model. ... The configuration we search includes the embedding size, the split point of tunable embeddings and GNN embeddings, the number of embedding update steps, and the sampling batch size. |
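
The experiment-setup quote pins down three concrete choices: an Adam optimizer with an initial learning rate of 0.0005, halving the rate after 10 epochs without validation improvement, and a two-layer MLP with ReLU as the nonlinear transformation in each GNN embedding update step. The sketch below shows how that configuration could look in PyTorch. It is a minimal illustration, not the authors' code: the embedding size, the dummy data, and the mapping of the decay rule onto `ReduceLROnPlateau` are our assumptions.

```python
import torch
import torch.nn as nn

# Illustrative dimension; the paper tunes the embedding size per dataset.
embed_dim = 64

# Two-layer MLP with ReLU, matching the description of the nonlinear
# transformation used in each GNN embedding update step.
mlp = nn.Sequential(
    nn.Linear(embed_dim, embed_dim),
    nn.ReLU(),
    nn.Linear(embed_dim, embed_dim),
)

# Adam with the reported initial learning rate of 0.0005. Mapping the
# "halve after 10 epochs without validation improvement" rule onto
# ReduceLROnPlateau is our assumption; the paper does not name a scheduler.
optimizer = torch.optim.Adam(mlp.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=10
)

# Dummy tensors standing in for node embeddings and regression targets.
x = torch.randn(32, embed_dim)
target = torch.randn(32, embed_dim)
loss_fn = nn.MSELoss()

for epoch in range(30):
    optimizer.zero_grad()
    loss = loss_fn(mlp(x), target)
    loss.backward()
    optimizer.step()
    # In practice the scheduler would be stepped with the validation loss;
    # here the training loss stands in for it.
    scheduler.step(loss.item())
```

The remaining search-space items from the quote (the split point between tunable and GNN embeddings, the number of embedding update steps, and the sampling batch size) are hyperparameters the paper tunes per dataset; they have no fixed values to reproduce here.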