What’s Hot in the SAT and ASP Competitions

Authors: Marijn Heule, Torsten Schaub

AAAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The SAT Competitions, organized since 2002, have been the driving force of SAT solver development. The performance of contemporary SAT solvers is incomparable to that of solvers from a decade ago. As a consequence, SAT solvers are used as the core search engine in many utilities, including tools for hardware and software verification. In 2014, the SAT Competition was organized for the ninth time. As in earlier editions, SAT Competition 2014 consisted of three categories: industrial, hard-combinatorial, and random benchmarks. Similarly, the ASP Competition has a great impact on system development in ASP. Unlike the SAT Competition, its emphasis lies on modeling, with tracks reflecting different language fragments of increasing expressiveness.
Researcher Affiliation | Academia | Marijn J. H. Heule, The University of Texas at Austin, United States (marijn@cs.utexas.edu); Torsten Schaub, University of Potsdam, Germany (torsten@cs.uni-potsdam.de)
Pseudocode | No | No pseudocode or algorithm blocks are provided in the paper.
Open Source Code | No | The paper references various tools and their respective websites (e.g., dlv, http://www.dlvsystem.com; lp2normal, http://research.ics.aalto.fi/software/asp/lp2normal; Potassco, http://potassco.sourceforge.net), but it does not provide source code for the work presented in the paper itself.
Open Datasets | No | The paper mentions 'industrial, hard-combinatorial, and random benchmarks' for the SAT Competition and 'problem instances' for ASP, but it does not provide concrete access information (link, DOI, or a specific citation with authors and year) for any publicly available dataset used in its analysis.
Dataset Splits | No | The paper describes aspects of the SAT and ASP competitions, such as 'A large timeout of 5,000 seconds was used for each solver/benchmark pair', but it does not specify any training/validation/test dataset splits for its analysis.
Hardware Specification | Yes | SC14 consumed 400,000 hours of CPU time, which were executed in only five days on the Lonestar cluster at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. [...] Over the last two years, parallel SAT solver performance has clearly improved, yet the gains on a 12-core machine are still modest.
Software Dependencies | No | The paper mentions various tools and systems such as lingeling, plingeling, dimetheus, DRAT-trim, clasp, and lp2normal, as well as the ASP-Core-2 language standard, but it does not specify version numbers for these or for any other software dependencies needed to replicate the experimental setup.
Experiment Setup | No | The paper describes the setup of the SAT and ASP competitions, such as 'A large timeout of 5,000 seconds was used for each solver/benchmark pair' and the number of participating solvers. However, it does not provide specific hyperparameter values or a detailed experimental configuration for a system developed by the authors of this paper.