A Top-Down Tree Model Counter for Quantified Boolean Formulas

Authors: Florent Capelli, Jean-Marie Lagniez, Andreas Plank, Martina Seidl

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results indicate that our proposed approach for counting tree models of QBF formulas is highly efficient in practice, surpassing existing state-of-the-art methods designed for this specific purpose.
Researcher Affiliation | Academia | Florent Capelli¹, Jean-Marie Lagniez¹, Andreas Plank² and Martina Seidl²; ¹Univ. Artois, CNRS, Centre de Recherche en Informatique de Lens (CRIL), F-62300 Lens, France; ²Institute for Symbolic Artificial Intelligence, JKU Linz, Austria.
Pseudocode | Yes | Algorithm 1: d4-QBF and Algorithm 2: Count Reduct are provided in the paper (a sketch of the underlying counting recursion is given after this table).
Open Source Code | Yes | We implemented the algorithm introduced above in a C++ tool called d4-QBF (https://zenodo.org/records/11153123). This tool builds upon the infrastructure provided by the propositional model counter D4 [Lagniez and Marquis, 2019].
Open Datasets | Yes | Unique-SAT and random benchmarks [Plank et al., 2023]. These formulas are true 2QBFs with prefix ∀X∃Y for which the exact model count is known. ... Crafted instances [Heisinger and Seidl, 2023].
Dataset Splits | No | The paper uses pre-existing benchmark datasets but does not provide training/validation/test splits with percentages, counts, or a detailed splitting methodology for its experiments.
Hardware Specification | Yes | All experiments were executed on a compute cluster with dual-socket 16-core AMD EPYC 7313 processors at 3.7 GHz and 256 GB main memory. All tools were run on a single core with a timeout of 900 seconds; the memory limit was set to 32 GB (a sketch of enforcing these limits is given after this table).
Software Dependencies | No | The paper mentions that its tool `d4-QBF` is written in C++ and builds upon `D4`, but it does not specify version numbers for `D4` or any other software dependencies required to replicate the experimental environment.
Experiment Setup | No | The paper states general execution limits (a 900-second timeout and a 32 GB memory limit) but does not provide specific hyperparameters or detailed system-level configurations for its algorithm beyond that.
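
To make the Pseudocode row concrete, here is a minimal, illustrative sketch of naive top-down tree model counting for a prenex 2QBF. This is not the paper's Algorithm 1 or the d4-QBF code base (which adds the Count Reduct step and D4's decomposition and caching machinery); all names and the data layout are assumptions. The recursion reflects the semantics of tree models: an existential variable contributes the sum of its two branches (the tree commits to one value), a universal variable the product (the tree must contain both branches).

```cpp
// Naive tree model counting sketch; illustrative only, not d4-QBF.
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <vector>

enum class Quant { Exists, Forall };

struct Prefix { Quant q; int var; };   // one entry of the quantifier prefix
using Clause = std::vector<int>;       // DIMACS-style literals: +v / -v
using Cnf    = std::vector<Clause>;

// Evaluate the CNF matrix once all variables are assigned.
static bool satisfied(const Cnf& cnf, const std::vector<bool>& val) {
    for (const Clause& c : cnf) {
        bool sat = false;
        for (int lit : c) {
            if (val[std::abs(lit)] == (lit > 0)) { sat = true; break; }
        }
        if (!sat) return false;
    }
    return true;
}

// Sum over existential branches, product over universal branches.
static std::uint64_t countTreeModels(const std::vector<Prefix>& pre,
                                     std::size_t i, const Cnf& cnf,
                                     std::vector<bool>& val) {
    if (i == pre.size()) return satisfied(cnf, val) ? 1 : 0;
    std::uint64_t cnt[2];
    for (int b = 0; b < 2; ++b) {
        val[pre[i].var] = (b == 1);
        cnt[b] = countTreeModels(pre, i + 1, cnf, val);
    }
    return pre[i].q == Quant::Exists ? cnt[0] + cnt[1] : cnt[0] * cnt[1];
}

int main() {
    // ∀x ∃y . (x ∨ y) ∧ (¬x ∨ ¬y): the unique Skolem function is y = ¬x,
    // so the tree model count is 1.
    std::vector<Prefix> pre = {{Quant::Forall, 1}, {Quant::Exists, 2}};
    Cnf cnf = {{1, 2}, {-1, -2}};
    std::vector<bool> val(3, false);
    std::cout << countTreeModels(pre, 0, cnf, val) << "\n";  // prints 1
}
```

The exponential enumeration in this sketch is precisely what the paper's approach avoids by building on D4's infrastructure; the sketch only pins down what is being counted.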
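
The resource limits in the Hardware Specification row (900-second timeout, 32 GB memory cap) could be enforced on a POSIX system with setrlimit before launching the solver. The following is a hypothetical wrapper under that assumption, not the paper's harness; the solver path and instance name are placeholders.

```cpp
// Hypothetical resource-limit wrapper; not part of d4-QBF or the paper's setup.
#include <sys/resource.h>
#include <unistd.h>
#include <cstdlib>

int main() {
    rlimit mem{32ull << 30, 32ull << 30};  // 32 GB address-space cap
    setrlimit(RLIMIT_AS, &mem);
    rlimit cpu{900, 900};                  // 900 s CPU-time cap (SIGXCPU on excess)
    setrlimit(RLIMIT_CPU, &cpu);
    // Both limits survive exec(); binary and instance names are placeholders.
    execlp("./d4-qbf", "./d4-qbf", "instance.qdimacs",
           static_cast<char*>(nullptr));
    return EXIT_FAILURE;  // reached only if exec failed
}
```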