Improving Model Counting by Leveraging Definability

Authors: Jean-Marie Lagniez, Emmanuel Lonca, Pierre Marquis

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments show the computational benefits which can be achieved by taking advantage of our preprocessing technique for model counting."
Researcher Affiliation | Academia | "CRIL-CNRS and Université d'Artois, Lens, France. Email: {lagniez, lonca, marquis}@cril.univ-artois.fr"
Pseudocode | Yes | Algorithm 1: B+E, Algorithm 2: B, Algorithm 3: E (a sketch of the definability test behind B appears after the table)
Open Source Code | Yes | "The benchmarks used, the implementation (runtime code) of B+E, some detailed empirical results, and a full-proof version of the paper are available online from www.cril.fr/KC/."
Open Datasets | Yes | "In our experiments, we have considered 703 CNF instances from the SATLIB library." (www.cs.ubc.ca/~hoos/SATLIB/index-ubc.html)
Dataset Splits | No | The paper evaluates model counting on a collection of CNF instances; it does not involve training, validation, or test splits in the machine-learning sense.
Hardware Specification | Yes | "Our experiments have been conducted on Intel Xeon E5-2643 (3.30 GHz) processors with 32 GiB RAM on Linux CentOS."
Software Dependencies | No | The paper mentions several tools (Cachet, sharpSAT, C2D, and the pmc preprocessor) but gives no version numbers for any of them, which limits reproducibility.
Experiment Setup | Yes | "A time-out of 1h and a memory-out of 7.6 GiB has been considered for each instance. We set max#Res to 500. C2D has been invoked with the following options: -count -in_memory -smooth_all" (a driver sketch reproducing this setup follows the table)
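
For context on the pseudocode row: algorithm B decides which variables are defined in terms of the remaining ones using Padoa's method, which reduces each definability question to a single (un)satisfiability test. The following is a minimal sketch of that test, assuming the third-party python-sat package; the function name is_defined and the solver choice are illustrative assumptions, not the authors' B+E implementation (which is a standalone preprocessor).

```python
# A minimal sketch of Padoa's definability test, assuming the
# python-sat package; this is not the authors' B+E code.
from pysat.solvers import Glucose3

def is_defined(clauses, num_vars, inputs, y):
    """True iff variable y is defined in terms of `inputs` by the CNF
    `clauses` (DIMACS-style lists of signed ints over 1..num_vars)."""
    shift = num_vars  # rename variable v to v + shift in the copy
    with Glucose3() as s:
        for c in clauses:
            s.add_clause(c)                                   # Sigma over V
            s.add_clause([l + shift if l > 0 else l - shift   # Sigma over V'
                          for l in c])
        for x in inputs:  # tie the two copies together on the inputs: x <-> x'
            s.add_clause([-x, x + shift])
            s.add_clause([x, -(x + shift)])
        # Padoa: y is defined by `inputs` iff Sigma ^ Sigma' ^ y ^ -y' is UNSAT
        return not s.solve(assumptions=[y, -(y + shift)])
```

B runs such a test per candidate variable to compute a bipartition of the variables into defining inputs and defined outputs; E then eliminates defined output variables, which preserves the model count of the formula.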
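
The experiment-setup row can be read as a per-instance protocol: 1h time-out, 7.6 GiB memory-out, B+E preprocessing with max#Res set to 500, then model counting with C2D. Below is a hypothetical driver for that protocol; the B+E binary name, its path, and how max#Res is passed are assumptions (the paper does not document the command-line interface), while the c2d options are the ones quoted above and -in is c2d's usual input flag.

```python
# Hypothetical per-instance driver for the stated protocol: 1h time-out,
# 7.6 GiB memory-out, B+E preprocessing, then model counting with c2d.
# The B+E invocation is a placeholder; only the c2d options
# (-count -in_memory -smooth_all) are taken from the paper.
import resource
import subprocess

TIMEOUT_S = 3600                 # 1h time-out per instance
MEM_BYTES = int(7.6 * 2 ** 30)   # 7.6 GiB memory-out

def limit_memory():
    # Enforce the memory-out in the child process (POSIX only).
    resource.setrlimit(resource.RLIMIT_AS, (MEM_BYTES, MEM_BYTES))

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True,
                          timeout=TIMEOUT_S, preexec_fn=limit_memory)

try:
    # Preprocess with B+E (the paper sets max#Res to 500; how that knob
    # is exposed on the command line is not documented, so none is shown).
    run(["./bpe", "input.cnf", "simplified.cnf"])
    # Count the models of the simplified CNF with c2d and the quoted options.
    run(["./c2d", "-in", "simplified.cnf", "-count", "-in_memory", "-smooth_all"])
except subprocess.TimeoutExpired:
    print("time-out (1h) reached")
```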