On the Hardness of Probabilistic Neurosymbolic Learning

Authors: Jaron Maene, Vincent Derkinderen, Luc De Raedt

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, Sections 6 and 7 provide a comprehensive overview and evaluation of existing approximation techniques. Our results indicate that they have difficulties optimizing benchmarks that can still easily be solved exactly. This suggests that principled methods are warranted if we want to apply probabilistic neurosymbolic optimization to more complex reasoning tasks." and, from Section 7 (Experiments): "It is clear that many methods exist to approximate WMC gradients, so the question arises as to which of the methods are appropriate in practice. For this reason, we evaluate the gradients of the various methods on a set of challenging WMC benchmarks."
Researcher Affiliation | Academia | "1 KU Leuven, Department of Computer Science, Leuven, Belgium. 2 Örebro University, Centre for Applied Autonomous Sensor Systems, Örebro, Sweden."
Pseudocode | No | No explicit pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | "The code to replicate these experiments can be found at https://github.com/jjcmoon/hardness-nesy."
Open Datasets | Yes | "We take the benchmarks of the last three competitions (2021, 2022, and 2023) and take the instances that are probabilistic and can be solved exactly by state-of-the-art solvers (Lagniez & Marquis, 2017; Golia et al., 2021). As an easier benchmark, we also include the logical formula from the ROAD-R dataset (Giunchiglia et al., 2023)..."
Dataset Splits | No | The paper does not explicitly state train/validation/test splits (percentages or sample counts) for the main experiments. It mentions training iterations for a pedagogical MNIST-addition task, but gives no splits for the MCC benchmarks.
Hardware Specification | Yes | "All methods were executed on the same machine with an Intel Xeon E5-2690 CPU and used PyTorch to compute the gradients."
Software Dependencies | No | The paper mentions PyTorch and specific solvers such as CMSGen, EvalMaxSAT, and the d4 knowledge compiler, but provides no version numbers for PyTorch, Python, or other libraries that would be needed for a reproducible setup.
Experiment Setup | Yes | "The weights are initialized with a Gaussian distribution with a mean of 1/2.", "All approximate methods got a timeout of 5 minutes per gradient.", and a "maximum of 10000 iterations." Table 2 also lists method-specific parameters: WeightME (k=100), straight-through estimator (s=10), Gumbel-Softmax estimator (s=10, τ=2).
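
To make the quoted setup parameters concrete, the following is a minimal PyTorch sketch, not the authors' code: it shows a toy weighted model counting (WMC) problem with weights initialized from a Gaussian with mean 1/2, an exact WMC gradient via autograd, and the two sampling-based estimators from Table 2 (straight-through with s=10, Gumbel-Softmax with s=10, τ=2). The toy formula, the initializer's standard deviation, the clamping, and all names are assumptions for illustration; WeightME (k=100) is omitted because it requires a weighted model sampler.

```python
import itertools
import torch

# Hedged sketch (not the authors' code): a toy WMC problem and the two
# sampling-based gradient estimators listed in Table 2. The formula,
# initializer std, clamp, and names below are illustrative assumptions.

def formula(x):
    # Relaxed truth value of the toy formula x1 AND (x2 OR NOT x3);
    # with hard 0/1 inputs this equals the Boolean truth value.
    x1, x2, x3 = x.unbind(-1)
    return x1 * (1 - (1 - x2) * x3)

n_vars = 3
torch.manual_seed(0)

# "The weights are initialized with a Gaussian distribution with a mean of 1/2."
# (The std of 0.1 and the clamp to valid probabilities are assumptions.)
p = torch.nn.Parameter((0.5 + 0.1 * torch.randn(n_vars)).clamp(0.01, 0.99))

def exact_wmc(p):
    # Brute-force WMC: sum over all assignments, weighting each variable by
    # p (if true) or 1 - p (if false). Differentiable w.r.t. p via autograd.
    total = 0.0
    for bits in itertools.product([0.0, 1.0], repeat=n_vars):
        x = torch.tensor(bits)
        weight = torch.prod(torch.where(x.bool(), p, 1 - p))
        total = total + weight * formula(x)
    return total

def straight_through(p, s=10):
    # Straight-through estimator (s = 10 in Table 2): hard Bernoulli samples
    # in the forward pass, identity gradient in the backward pass.
    hard = torch.bernoulli(p.expand(s, n_vars))
    return formula(hard + p - p.detach()).mean()

def gumbel_softmax(p, s=10, tau=2.0):
    # Gumbel-Softmax / binary-concrete estimator (s = 10, tau = 2 in Table 2):
    # soft samples in (0, 1) that are differentiable in p.
    u = torch.rand(s, n_vars)
    noise = torch.log(u) - torch.log(1 - u)
    soft = torch.sigmoid((torch.logit(p) + noise) / tau)
    return formula(soft).mean()

for name, estimate in [("exact", exact_wmc(p)),
                       ("straight-through", straight_through(p)),
                       ("gumbel-softmax", gumbel_softmax(p))]:
    grad = torch.autograd.grad(estimate, p)[0]
    print(f"{name:>16}: value={estimate.item():.4f} grad={grad}")
```

On the benchmarks in the paper, the exact gradient would come from a compiled circuit rather than enumeration; the point of this sketch is only to show what the approximate estimators compute and where the Table 2 parameters enter.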