Soft-Unification in Deep Probabilistic Logic

Authors: Jaron Maene, Luc De Raedt

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | "Our experiments demonstrate that DeepSoftLog can outperform the state-of-the-art on neuro-symbolic benchmarks, highlighting the benefits of these properties."
Researcher Affiliation | Academia | Jaron Maene (KU Leuven, jaron.maene@kuleuven.be); Luc De Raedt (KU Leuven & Örebro University, luc.deraedt@kuleuven.be)
Pseudocode | Yes | Algorithm 1: Soft-unification (a hedged sketch of the idea appears below the table)
Open Source Code | Yes | "All code is available on https://github.com/jjcmoon/DeepSoftLog."
Open Datasets | Yes | "For the full experimental setup, we refer to previous work [26, 29]."; "We use the same experimental setup and neural network as previous works, to which we refer for more details [22]."; "As an example, consider the regular language (01) represented by MNIST images."
Dataset Splits | Yes | "For the full experimental setup, we refer to previous work [26, 29]."; "We use the same experimental setup and neural network as previous works, to which we refer for more details [22]."; "We train on sequences of lengths up to 4 and test the generalization by evaluating on sequences of length 8 with images not seen during training."; "We averaged the results over the other 10 splits."
Hardware Specification | Yes | "We ran the experiments on CPU (Intel Core i7-2600 CPU @ 3.40GHz) with 16GB of RAM."; "We ran all experiments on a single CPU (Apple M2)."; "We ran all experiments on CPU (Intel(R) Xeon(R) CPU E3-1225 v3 @ 3.20GHz)."
Software Dependencies | No | The paper mentions using the "gradient semiring of algebraic ProbLog" and the "AdamW optimizer" but does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | "Hyperparameters are summarized in table 5."; "Hyperparameters are summarized in table 6."; "Hyperparameters are summarized in table 7."; "Hyperparameters for the visual sudoku experiment are in table 9."
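
The Pseudocode row refers to the paper's Algorithm 1 (soft-unification). As a rough illustration of the idea only, the sketch below scores how well two ground constants unify by comparing learned embeddings. The `SoftUnifier` class, the toy vocabulary, and the exp(-distance) kernel are illustrative assumptions; they are not the paper's exact algorithm or the DeepSoftLog implementation.

```python
# Hypothetical sketch of soft-unification over embedded terms.
# Not Algorithm 1 from the paper or code from the DeepSoftLog repo;
# the kernel choice (exp of negative L2 distance) is an assumption.
import torch
import torch.nn as nn


class SoftUnifier(nn.Module):
    """Assigns a unification probability to pairs of constants.

    Identical symbols unify with probability 1; distinct symbols
    unify with a probability derived from the distance between
    their learned embeddings.
    """

    def __init__(self, vocab: list[str], dim: int = 16):
        super().__init__()
        self.index = {symbol: i for i, symbol in enumerate(vocab)}
        self.embeddings = nn.Embedding(len(vocab), dim)

    def forward(self, a: str, b: str) -> torch.Tensor:
        if a == b:  # ordinary (hard) unification
            return torch.tensor(1.0)
        e_a = self.embeddings(torch.tensor(self.index[a]))
        e_b = self.embeddings(torch.tensor(self.index[b]))
        # Soft-unification probability: a kernel on the embedding distance
        # (illustrative; the paper defines its own kernel and semantics).
        return torch.exp(-torch.norm(e_a - e_b))


unifier = SoftUnifier(["zero", "one", "mnist_img_7"])
print(unifier("zero", "zero"))         # exactly 1.0
print(unifier("zero", "mnist_img_7"))  # learned, differentiable probability
```

In the full system, such unification probabilities are combined by probabilistic-logic inference (the "gradient semiring of algebraic ProbLog" quoted in the Software Dependencies row), which this sketch does not attempt to reproduce.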