SAT Competition 2016: Recent Developments
Authors: Tomáš Balyo, Marijn Heule, Matti Järvisalo
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We give an overview of SAT Competition 2016, the 2016 edition of the famous competition for Boolean satisfiability (SAT) solvers with over 20 years of history. A key aim is to point out what's hot in SAT competitions in 2016, i.e., new developments in the competition series, including new competition tracks and new solver techniques implemented in some of the award-winning solvers. |
| Researcher Affiliation | Academia | Tomáš Balyo, Karlsruhe Institute of Technology, Karlsruhe, Germany; Marijn J. H. Heule, Department of Computer Science, The University of Texas at Austin, USA; Matti Järvisalo, HIIT, Department of Computer Science, University of Helsinki, Finland |
| Pseudocode | No | No pseudocode or algorithm blocks are found in the paper. |
| Open Source Code | No | The paper mentions that the No-Limit track lifted restrictions (such as the ban on closed-source solvers) to encourage non-academic participation, which indicates that solvers in the other tracks are open source. However, the paper itself, authored by Balyo, Heule, and Järvisalo, does not state that source code is released for the analysis or overview presented in this paper. |
| Open Datasets | Yes | A further central role of the SAT Competition series is to provide the research community with systematically constructed benchmark sets on a yearly basis, which nowadays have become a key ingredient in empirical evaluation of SAT solver techniques in published SAT research. Furthermore, the website offers the 2016 edition of the SAT Competition benchmark suites partitioned in terms of the traditional main tracks: application, crafted, and random instances. |
| Dataset Splits | No | The paper describes how benchmarks were selected and categorized for the competition ('easy', 'medium', 'hard' based on solving time by specific solvers), and how solvers were ranked. However, it does not provide specific train/validation/test dataset splits for any model training described by the authors of this paper. |
| Hardware Specification | Yes | Solvers in the Parallel track ran on nodes with 24 cores (48 with hyper-threading) and 96 gigabytes of memory. |
| Software Dependencies | Yes | Benchmarks were considered easy if an established sequential solver, MiniSat 2.2 (Eén and Sörensson 2004), required less than 600 seconds to solve it. |
| Experiment Setup | Yes | The Agile track had a 60-second timeout, while the other tracks used a 5000-second timeout. Satisfiable benchmarks were considered solved if the solver produced a correct solution, except for the No-Limit track in which only a claim of satisfiability was required. In the Main track, we considered unsatisfiable benchmarks solved only if the solver produced a valid DRAT proof (Heule 2016), which was checked by the DRAT-trim checker (Wetzler, Heule, and Hunt 2014). |
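
The Experiment Setup and Software Dependencies rows describe a concrete evaluation protocol: per-track timeouts, solution checking for satisfiable answers, and DRAT proof validation with the DRAT-trim checker for unsatisfiable answers. The following Python sketch illustrates that protocol under stated assumptions: the solver binary name, its argument order, and the use of the standard competition output lines (`s SATISFIABLE` / `s UNSATISFIABLE`) are illustrative, not prescribed by the paper; only the `drat-trim <cnf> <proof>` invocation follows the real checker's command-line usage.

```python
"""Hedged sketch of the Main-track evaluation loop described above.

Assumes a competition-style solver binary that prints an "s SATISFIABLE" /
"s UNSATISFIABLE" status line and writes a DRAT proof to the given file,
plus the drat-trim checker on PATH. Solver interface details are assumptions.
"""
import subprocess

MAIN_TRACK_TIMEOUT = 5000  # seconds (the Agile track used 60 seconds)


def evaluate(solver, cnf_file, proof_file="proof.drat"):
    """Run one solver on one benchmark and decide whether it counts as solved."""
    try:
        run = subprocess.run(
            [solver, cnf_file, proof_file],  # hypothetical solver interface
            capture_output=True, text=True,
            timeout=MAIN_TRACK_TIMEOUT,
        )
    except subprocess.TimeoutExpired:
        return "timeout"

    out = run.stdout
    if "s SATISFIABLE" in out:
        # In the Main track a satisfiable answer also requires a correct model;
        # a full harness would check the "v ..." lines against the formula.
        return "sat-claimed"
    if "s UNSATISFIABLE" in out:
        # Unsatisfiable answers count only if the DRAT proof verifies.
        check = subprocess.run(
            ["drat-trim", cnf_file, proof_file],
            capture_output=True, text=True,
        )
        # drat-trim reports a successfully checked proof with "s VERIFIED".
        return "unsat-verified" if "s VERIFIED" in check.stdout else "unsat-rejected"
    return "unknown"


if __name__ == "__main__":
    print(evaluate("./my_solver", "benchmark.cnf"))
```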