DeepProbLog: Neural Probabilistic Logic Programming
Authors: Robin Manhaeve, Sebastijan Dumančić, Angelika Kimmig, Thomas Demeester, Luc De Raedt
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform three sets of experiments to demonstrate that DeepProbLog supports (i) symbolic and subsymbolic reasoning and learning, that is, both logical reasoning and deep learning; (ii) program induction; and (iii) both probabilistic logic programming and deep learning. We provide implementation details at the end of this section and list all programs in Appendix A. (An illustrative sketch of such a program appears below the table.) |
| Researcher Affiliation | Academia | Robin Manhaeve (KU Leuven, robin.manhaeve@cs.kuleuven.be); Sebastijan Dumančić (KU Leuven, sebastijan.dumancic@cs.kuleuven.be); Angelika Kimmig (Cardiff University, KimmigA@cardiff.ac.uk); Thomas Demeester (Ghent University - imec, thomas.demeester@ugent.be); Luc De Raedt (KU Leuven, luc.deraedt@cs.kuleuven.be) |
| Pseudocode | No | The paper describes processes and concepts in prose and diagrams but does not include any explicit pseudocode blocks or algorithm listings. |
| Open Source Code | Yes | The code is available at https://bitbucket.org/problog/deepproblog. |
| Open Datasets | Yes | We extend the classic learning task on the MNIST dataset (LeCun et al. [1998]) to two more complex problems that require reasoning. |
| Dataset Splits | No | For the coin-ball problem the paper states that "Training on a set of 256 instances converges after 5 epochs, leading to 100% accuracy on the test set (64 instances)", but it does not give explicit train/test splits for every experiment and never mentions a validation set. |
| Hardware Specification | No | The paper mentions that experiments were run "on GPU" and "on CPU" but gives no specific hardware details such as GPU model, CPU type, or memory size. |
| Software Dependencies | No | For the implementation we integrated ProbLog2 [Dries et al., 2015] with PyTorch [Paszke et al., 2017]. Specific version numbers for PyTorch and ProbLog2 are not provided. |
| Experiment Setup | Yes | The network used to classify MNIST images is a basic architecture based on the PyTorch tutorial. It consists of 2 convolutional layers with kernel size 5, and respectively 6 and 16 filters, each followed by a maxpool layer of size 2, stride 2. After this come 3 fully connected layers of sizes 120, 84 and 10 (19 for the CNN baseline). ... For all experiments we use Adam [Kingma and Ba, 2015] optimization for the neural networks, and SGD for the logic parameters. The learning rate is 0.001 for the MNIST network, and 1 for the colour network. For robustness in optimization, we use a warm-up of the learning rate of the logic parameters for the coin-ball experiments, starting at 0.0001 and raising it linearly to 0.01 over four epochs. (A hedged code reconstruction of this setup appears below the table.) |
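For context on the Research Type row: the paper's central device is the neural predicate, a neural network wrapped as a probabilistic logic fact. The sketch below shows the well-known MNIST-addition example in the neural-predicate syntax the paper describes, embedded in a Python string for illustration; the exact surface syntax expected by the released code is an assumption here, not verified against the repository.

```python
# Illustrative only: the canonical MNIST-addition DeepProbLog program.
# `mnist_net` names the CNN behind the neural predicate digit/2; given an
# image X, the network outputs a distribution over the digits 0..9, and a
# single logical rule lifts digit classification to addition of two images.
MNIST_ADDITION_PROGRAM = """
nn(mnist_net, [X], Y, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) :: digit(X, Y).

addition(X, Y, Z) :- digit(X, X2), digit(Y, Y2), Z is X2 + Y2.
"""
```

Training examples are then queries such as `addition(img_3, img_5, 8)`, so the network is supervised only through the truth value of the logical query rather than through per-image digit labels.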
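The Experiment Setup row fully determines the layer sizes of the MNIST network, so a reconstruction is possible. The following is a minimal sketch in standard PyTorch, assuming ReLU activations and a final softmax (the quoted setup gives only the layer sizes; activation choices, layer names, and the `MNISTNet` class itself are assumptions):

```python
# Hedged reconstruction of the MNIST classifier described in the paper:
# 2 conv layers (kernel 5; 6 and 16 filters), each followed by 2x2
# max-pooling with stride 2, then fully connected layers of 120, 84, 10.
import torch
import torch.nn as nn


class MNISTNet(nn.Module):
    def __init__(self, n_classes: int = 10):  # 19 for the CNN baseline
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 1x28x28 -> 6x24x24
            nn.MaxPool2d(2, stride=2),        # -> 6x12x12
            nn.ReLU(),                        # activation: assumed
            nn.Conv2d(6, 16, kernel_size=5),  # -> 16x8x8
            nn.MaxPool2d(2, stride=2),        # -> 16x4x4
            nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Linear(16 * 4 * 4, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, n_classes),
            nn.Softmax(dim=1),  # neural predicates output a distribution
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


# Optimizer choice stated in the paper: Adam with lr = 0.001 for this
# network; the logic parameters are trained separately with SGD.
net = MNISTNet()
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
```

A batch of MNIST images shaped `(N, 1, 28, 28)` passes through as written, e.g. `net(torch.randn(2, 1, 28, 28))` yields a `(2, 10)` probability matrix. The warm-up schedule for the logic parameters (0.0001 rising linearly to 0.01 over four epochs) is specific to the coin-ball experiments and is not included in this sketch.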