Neurosymbolic Reasoning and Learning with Restricted Boltzmann Machines

Authors: Son N. Tran, Artur d'Avila Garcez

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the application of our approach empirically on logical reasoning and learning from data and knowledge. Experimental results show that reasoning can be performed effectively for a class of logical formulae. Learning from data and knowledge is also evaluated in comparison with learning of logic programs using neural networks.
Researcher Affiliation | Academia | Son N. Tran, The University of Tasmania, Launceston, Tasmania, 7248, Australia; Artur d'Avila Garcez, City, University of London, Northampton Square, London, EC1V 0HB, UK
Pseudocode | No | The paper describes its algorithms and processes textually, but does not include any formally labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions using 'off-the-shelf methods from the optimisation library in SciPy (https://scipy.org/)', which refers to a third-party tool, but it provides no link to, or statement about, releasing the code for the authors' own method.
Open Datasets | Yes | We carry out experiments on 7 data sets with available data and background knowledge (BK): Mutagenesis (examples of molecules tested for mutagenicity and BK provided in the form of rules describing relationships between atom bonds) (Srinivasan et al. 1994), KRK (King-Rook versus King chess endgame with examples provided by the coordinates of the pieces on the board and BK in the form of row and column differences) (Bain and Muggleton 1995), UWCSE (Entity-Relationship diagram with data about students, courses taken, professors, etc. and BK describing the relational structure) (Richardson and Domingos 2006), and the Alzheimer's benchmark: Amine, Acetyl, Memory and Toxic (a set of examples for each of four properties of a drug design for Alzheimer's disease with BK describing bonds between the chemical structures) (King, Sternberg, and Srinivasan 1995).
Dataset Splits | Yes | The remaining data are used for training and validation based on 10-fold cross validation for each data set, except for UWCSE which uses 5 folds (for the sake of comparison). A hedged sketch of this splitting scheme appears after the table.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments. It only mentions general properties such as 'inference with RBMs can be performed in parallel' (illustrated by the block Gibbs sketch after the table).
Software Dependencies | No | The paper mentions using 'off-the-shelf methods from the optimisation library in SciPy (https://scipy.org/)' but does not specify a version number for SciPy or for any other software dependency used in the experiments. A hedged sketch of such a SciPy-based optimisation call appears after the table.
Experiment Setup | No | While the paper describes data splitting (e.g., '2.5% of the data is used to build the initial LBM'), it does not report specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations (e.g., optimizer settings) needed for reproducibility.
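
The splitting scheme quoted above is the most concrete setup detail the paper gives. Below is a minimal sketch of that scheme, assuming plain index bookkeeping with NumPy; the data, sizes, and shuffling seed are illustrative placeholders, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(1000, 20))  # placeholder binary examples

# Reserve 2.5% of the data to build the initial LBM, as the quote states.
perm = rng.permutation(len(X))
n_init = max(1, int(0.025 * len(X)))
init_idx, rest_idx = perm[:n_init], perm[n_init:]

# 10-fold cross validation over the remainder (the paper uses 5 folds
# for UWCSE, for the sake of comparison with prior work).
n_folds = 10
folds = np.array_split(rest_idx, n_folds)
for k in range(n_folds):
    val_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != k])
    print(f"fold {k}: train={len(train_idx)}, val={len(val_idx)}")
```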
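
The only implementation hint is the reliance on 'off-the-shelf methods from the optimisation library in SciPy'. Since the authors' code and objective are not released, the following is a hedged sketch of what such a call could look like, assuming a tiny RBM whose partition function can be enumerated exactly; the model size, toy data, and choice of L-BFGS-B are assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.optimize import minimize

n_v, n_h = 4, 2  # tiny model so the partition function is tractable

def unpack(theta):
    # Flat parameter vector -> weights, visible biases, hidden biases.
    W = theta[: n_v * n_h].reshape(n_v, n_h)
    b = theta[n_v * n_h : n_v * n_h + n_v]
    c = theta[n_v * n_h + n_v :]
    return W, b, c

def free_energy(V, W, b, c):
    # F(v) = -v.b - sum_j log(1 + exp(c_j + v.W_:,j))
    return -V @ b - np.logaddexp(0.0, V @ W + c).sum(axis=1)

def neg_log_likelihood(theta, V):
    W, b, c = unpack(theta)
    # Exact log partition function by enumerating all 2^n_v visible states.
    all_v = np.array([[(s >> i) & 1 for i in range(n_v)]
                      for s in range(2 ** n_v)], dtype=float)
    log_Z = np.logaddexp.reduce(-free_energy(all_v, W, b, c))
    return (free_energy(V, W, b, c) + log_Z).mean()

rng = np.random.default_rng(0)
V = rng.integers(0, 2, size=(50, n_v)).astype(float)  # toy binary data
theta0 = 0.01 * rng.standard_normal(n_v * n_h + n_v + n_h)
res = minimize(neg_log_likelihood, theta0, args=(V,), method="L-BFGS-B")
print("final negative log-likelihood:", res.fun)
```

Exact enumeration is feasible only at toy scale; the point is merely that SciPy's generic optimisers can be applied to an RBM objective without custom training code.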
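
Finally, the quoted remark that 'inference with RBMs can be performed in parallel' follows from the standard conditional independence of RBM layers: given the visible units, all hidden units can be sampled at once, and vice versa. A minimal sketch of one block Gibbs step (weights and data are random placeholders, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)
n_v, n_h = 6, 3
W = 0.1 * rng.standard_normal((n_v, n_h))
b = np.zeros(n_v)  # visible biases
c = np.zeros(n_h)  # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v = rng.integers(0, 2, size=n_v).astype(float)

# Each conditional factorises over units, so a whole layer is updated
# in a single vectorised (hence parallelisable) operation.
p_h = sigmoid(c + v @ W)                  # p(h_j = 1 | v) for all j at once
h = (rng.random(n_h) < p_h).astype(float)
p_v = sigmoid(b + W @ h)                  # p(v_i = 1 | h) for all i at once
v_new = (rng.random(n_v) < p_v).astype(float)
print("hidden sample:", h, "reconstructed visible:", v_new)
```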