DeepStochLog: Neural Stochastic Logic Programming
Authors: Thomas Winters, Giuseppe Marra, Robin Manhaeve, Luc De Raedt
AAAI 2022, pp. 10090-10100
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The goal of our experiments is to answer the following questions: Q1: Does DeepStochLog reach state-of-the-art predictive performance on neural-symbolic tasks? Q2: How does the inference time of DeepStochLog compare to other neural-symbolic frameworks, and what is the role of tabling? Q3: Can DeepStochLog handle larger-scale tasks? Q4: Can DeepStochLog go beyond grammars and encode more general programs? |
| Researcher Affiliation | Academia | ¹Department of Computer Science, Leuven.AI, KU Leuven, Belgium; ²AASS, Örebro University, Sweden |
| Pseudocode | No | The paper provides code listings (Listings 1-7) demonstrating DeepStochLog programs and their translation to Prolog, but it does not contain pseudocode or a clearly labeled generic algorithm block. |
| Open Source Code | Yes | We released DeepStochLog as an installable Python package and published all code and data used in the evaluation tasks on https://github.com/ML-KULeuven/deepstochlog. |
| Open Datasets | Yes | For tasks T1, T3 and T4, we used the MNIST dataset from (LeCun et al. 1998) to generate new datasets. The MNIST dataset itself was released under the Creative Commons Attribution-Share Alike 3.0 license. We distribute our new datasets built on top of MNIST and the corresponding generating code under the Apache License 2.0 on https://github.com/ML-KULeuven/deepstochlog/releases/tag/0.0.1. The Handwritten Formula Recognition (HWF) dataset (used in T2) originates from (Li et al. 2020). The Cora and Citeseer datasets (T5) are from Sen et al. (2008). The dataset of the Word Algebra Problem (T6) originates from (Roy and Roth 2015). |
| Dataset Splits | Yes | We trained for 100 epochs and selected the model corresponding to the epoch with maximum accuracy on the validation set (see the selection sketch after this table). |
| Hardware Specification | Yes | Inference time experiments are all executed on a MacBook Pro 13" 2020 (2.3 GHz Quad-Core Intel Core i7 and 16 GB 3733 MHz LPDDR4). |
| Software Dependencies | No | The paper states that DeepStochLog was released "as an installable Python package", but it does not specify version numbers for Python or for other key software dependencies such as deep learning frameworks (e.g., PyTorch, TensorFlow) or supporting libraries. |
| Experiment Setup | Yes | For the MNIST addition problem, we trained the model for 25 epochs using the Adam optimizer with a learning rate of 0.001 and used 32 training terms in each batch for each digit length (see the optimizer sketch after this table). |
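
The model-selection protocol quoted under Dataset Splits (train for 100 epochs, keep the checkpoint with the highest validation accuracy) is a standard selection pattern. The sketch below is a minimal illustration, not the released DeepStochLog code; `train_one_epoch` and `validation_accuracy` are hypothetical placeholders for task-specific routines, and `model` is assumed to be a PyTorch-style module.

```python
import copy

def select_best_checkpoint(model, train_one_epoch, validation_accuracy,
                           epochs=100):
    """Minimal sketch of the paper's selection protocol: train for a fixed
    number of epochs and keep the epoch with the highest validation
    accuracy. `train_one_epoch` and `validation_accuracy` are hypothetical
    placeholders, not DeepStochLog API calls."""
    best_acc = float("-inf")
    best_state = copy.deepcopy(model.state_dict())
    for _ in range(epochs):
        train_one_epoch(model)
        acc = validation_accuracy(model)
        if acc > best_acc:
            best_acc = acc
            best_state = copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)  # restore the selected checkpoint
    return model, best_acc
```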
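
Similarly, the Experiment Setup row quotes the MNIST addition hyperparameters (25 epochs, Adam, learning rate 0.001, 32 training terms per batch for each digit length). Assuming the neural predicates are ordinary PyTorch modules, those settings correspond to the configuration below; `digit_classifier` is a hypothetical stand-in, since the actual architecture is defined in the released code.

```python
from torch import nn, optim

# Hypothetical stand-in for the MNIST digit classifier used as a neural
# predicate; the real architecture ships with the released code.
digit_classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
    nn.LogSoftmax(dim=1),  # per-digit log-probabilities
)

# Settings quoted in the paper for the MNIST addition task.
EPOCHS = 25       # training epochs
BATCH_TERMS = 32  # training terms per batch, per digit length
optimizer = optim.Adam(digit_classifier.parameters(), lr=0.001)
```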