Probabilistic Logic Neural Networks for Reasoning

Authors: Meng Qu, Jian Tang

NeurIPS 2019

Reproducibility variables, each listed with its result and the supporting LLM response:
Research Type: Experimental
LLM Response: Experiments on multiple knowledge graphs prove the effectiveness of pLogicNet over many competitive baselines.
Researcher Affiliation: Academia
LLM Response: Meng Qu (1,2), Jian Tang (1,3,4): (1) Mila Quebec AI Institute, (2) University of Montréal, (3) HEC Montréal, (4) CIFAR AI Research Chair.
Pseudocode: No
LLM Response: The paper describes the variational EM algorithm and its E-step and M-step procedures in detail, but it does not present them in a structured pseudocode block or a clearly labeled "Algorithm" section.
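
Since no pseudocode is given, the loop can only be reconstructed from the paper's prose. Below is a minimal sketch, assuming a Python interface in which rules can enumerate their groundings and the embedding model exposes fit and probability methods; every name here is a hypothetical placeholder, not the authors' (unreleased) code.

```python
# Hypothetical sketch of the variational EM loop the paper describes in
# prose; all names are assumptions, not the authors' implementation.

def variational_em(observed, rules, kge, update_rule_weights, n_iters=10):
    """observed: set of (head, relation, tail) triplets.
    rules: grounded logic rules, each exposing groundings(observed).
    kge: embedding model (q_theta) with fit(triplets) and probability(t).
    update_rule_weights: M-step routine for the MLN rule weights."""
    weights = {rule: 0.0 for rule in rules}
    for _ in range(n_iters):
        # E-step: fit the KG embedding model q_theta and use it to score
        # the hidden triplets that the logic rules propose.
        candidates = {t for rule in rules
                      for t in rule.groundings(observed)} - observed
        kge.fit(observed)
        annotated = {t: kge.probability(t) for t in candidates}
        # M-step: update the Markov logic network's rule weights by
        # gradient ascent, using the observed triplets plus the
        # E-step annotations.
        weights = update_rule_weights(weights, observed, annotated)
    return kge, weights
```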
Open Source Code: No
LLM Response: The paper does not provide an explicit statement about the release of its source code for the described methodology, nor does it include a link to a code repository.
Open Datasets: Yes
LLM Response: In experiments, we evaluate the pLogicNet on four benchmark datasets. The FB15k [3] and FB15k-237 [43] datasets are constructed from Freebase [2]. WN18 [3] and WN18RR [8] are constructed from WordNet [24]. The detailed statistics of the datasets are summarized in the appendix.
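
These benchmarks are commonly distributed as plain-text files with one head<TAB>relation<TAB>tail triplet per line (train.txt, valid.txt, test.txt). A small loader, assuming that conventional layout (the path in the usage comment is a placeholder):

```python
from pathlib import Path

def load_triplets(path):
    """Read one split of a benchmark such as FB15k-237 or WN18RR,
    assuming the common 'head<TAB>relation<TAB>tail' line format."""
    triplets = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if line.strip():
            head, relation, tail = line.strip().split("\t")
            triplets.append((head, relation, tail))
    return triplets

# Usage (placeholder path; point it at wherever the benchmark is stored):
# train = load_triplets("FB15k-237/train.txt")
```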
Dataset Splits: No
LLM Response: The paper mentions observed and hidden triplets and uses standard benchmark datasets, but it does not explicitly provide train/validation/test split percentages, per-split sample counts, or a cross-validation setup.
Hardware Specification: No
LLM Response: The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies: No
LLM Response: The paper mentions using TransE [3] as the default knowledge graph embedding model, but it does not list any specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or particular library versions).
Experiment Setup: Yes
LLM Response: To generate the candidate rules in the pLogicNet, we search for all the possible composition rules, inverse rules, symmetric rules and subrelation rules from the observed triplets, which is similar to [10, 15]. Then, we compute the empirical precision of each rule, i.e. p_l = |S_l ∩ O| / |S_l|, where S_l is the set of triplets extracted by the rule l and O is the set of the observed triplets. We only keep rules whose empirical precision is larger than a threshold τ_rule. TransE [3] is used as the default knowledge graph embedding model to parameterize q_θ. We update the weights of logic rules with gradient descent. The detailed hyperparameter settings are available in the appendix.
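
The rule-filtering step quoted above is straightforward to mirror in code. A sketch, assuming each candidate rule object can enumerate the triplets it extracts; the extract method and the default threshold are our placeholders, since the paper's actual τ_rule value is only given in its appendix.

```python
def empirical_precision(extracted, observed):
    """p_l = |S_l ∩ O| / |S_l|: the fraction of triplets extracted by
    rule l that are actually observed in the knowledge graph."""
    extracted = set(extracted)
    if not extracted:
        return 0.0
    return len(extracted & set(observed)) / len(extracted)

def filter_rules(candidate_rules, observed, tau_rule=0.1):
    """Keep only rules whose empirical precision exceeds tau_rule.
    The 0.1 default is an arbitrary placeholder, not the paper's value,
    and rule.extract() is an assumed interface for grounding a rule."""
    return [rule for rule in candidate_rules
            if empirical_precision(rule.extract(observed), observed) > tau_rule]
```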