Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Improving Model Counting by Leveraging Definability
Authors: Jean-Marie Lagniez, Emmanuel Lonca, Pierre Marquis
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show the computational benefits which can be achieved by taking advantage of our preprocessing technique for model counting. |
| Researcher Affiliation | Academia | CRIL-CNRS and Université d'Artois, Lens, France. email: EMAIL |
| Pseudocode | Yes | Algorithm 1: B + E, Algorithm 2: B, Algorithm 3: E |
| Open Source Code | Yes | The benchmarks used, the implementation (runtime code) of B + E, some detailed empirical results, and a full-proof version of the paper are available online from www.cril.fr/KC/. |
| Open Datasets | Yes | In our experiments, we have considered 703 CNF instances from the SATLIB library ... www.cs.ubc.ca/~hoos/SATLIB/index-ubc.html |
| Dataset Splits | No | The paper evaluates model counting performance on a collection of CNF instances and does not describe explicit training, validation, or test dataset splits in the context of machine learning model training. |
| Hardware Specification | Yes | Our experiments have been conducted on Intel Xeon E5-2643 (3.30 GHz) processors with 32 GiB RAM on Linux CentOS. |
| Software Dependencies | No | The paper mentions several software tools, including Cachet, SharpSAT, C2D, and pmc, but it does not specify version numbers for any of these components, which limits reproducibility. |
| Experiment Setup | Yes | A time-out of 1h and a memory-out of 7.6 GiB has been considered for each instance. We set max#Res to 500. C2D has been invoked with the following options: -count -in_memory -smooth_all |