Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks

Authors: Prithviraj Sen, Breno W. S. R. de Carvalho, Ryan Riegel, Alexander Gray

AAAI 2022, pp. 8212-8219 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable and can achieve comparable or higher accuracy due to their flexible parameterization. We experiment with diverse ILP benchmarks, including gridworld and knowledge base completion (KBC), that call for learning different kinds of rules, and show how our approach can tackle both effectively.
Researcher Affiliation | Industry | Prithviraj Sen, Breno W. S. R. de Carvalho, Ryan Riegel, Alexander Gray (IBM Research)
Pseudocode | No | No structured pseudocode or algorithm blocks were found.
Open Source Code | No | The paper provides a link (github.com/shehzaadzd/MINERVA) for the datasets used, but does not explicitly state that source code for the proposed method is available.
Open Datasets | Yes | We experiment with publicly available KBC datasets: Kinship, UMLS (Kok and Domingos 2007), WN18RR (Dettmers et al. 2018), and FB15K-237 (Toutanova and Chen 2015); see Table 1 for statistics. All are available at github.com/shehzaadzd/MINERVA.
Dataset Splits | No | The paper mentions evaluating on 'test set triples' and discusses 'filtered ranks', but does not specify explicit train/validation/test splits (e.g., percentages or counts) or reference predefined splits for reproducibility. (The filtered-rank protocol is sketched after this table.)
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models or memory) used for running the experiments were provided.
Software Dependencies | No | The paper mentions functions like 'relu1(x)' and 'maxout' and cites their origins, but does not provide specific software library names with version numbers (e.g., Python, PyTorch, or TensorFlow versions). (Illustrative definitions of these activations are sketched after this table.)
Experiment Setup | Yes | We provide additional details including the training algorithm used and hyperparameter tuning in Appendix A.
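
For reference, the Dataset Splits row above mentions filtered ranks. Below is a minimal sketch of the standard filtered-ranking protocol for KBC evaluation (Bordes et al. 2013); the function and variable names are ours, not the paper's:

```python
import numpy as np

def filtered_rank(scores, target, known_true):
    """Rank of the target entity under the 'filtered' KBC protocol:
    all other entities known to form true triples for this query are
    removed from the candidate list before ranking."""
    scores = scores.astype(float).copy()
    for idx in known_true:
        if idx != target:
            scores[idx] = -np.inf  # filter out competing true answers
    # Rank 1 means the target received the highest remaining score.
    return 1 + int(np.sum(scores > scores[target]))

# Example: 5 candidate entities; target is entity 2, and entity 0 is
# also a known true answer, so it is filtered out before ranking.
scores = np.array([0.9, 0.1, 0.8, 0.3, 0.2])
print(filtered_rank(scores, target=2, known_true=[0, 2]))  # -> 1
```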
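Similarly, the Software Dependencies row cites relu1 and maxout without giving definitions. The sketch below uses the standard definitions from the literature (relu1 as ReLU clipped at 1; maxout as an elementwise maximum over candidate feature maps); it is illustrative only and may differ in detail from the paper's implementation:

```python
import numpy as np

def relu1(x):
    # ReLU clipped at 1: relu1(x) = min(max(x, 0), 1).
    # Keeps activations in [0, 1], matching the truth-value
    # semantics used in logical neural networks.
    return np.clip(x, 0.0, 1.0)

def maxout(pieces):
    # Maxout (Goodfellow et al. 2013): elementwise maximum over a
    # set of candidate (typically affine) feature maps.
    return np.max(np.stack(pieces), axis=0)

# Example usage on toy values.
print(relu1(np.array([-0.5, 0.3, 1.7])))                      # -> [0.  0.3 1. ]
print(maxout([np.array([0.1, 0.9]), np.array([0.4, 0.2])]))   # -> [0.4 0.9]
```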