MaxSAT by Improved Instance-Specific Algorithm Configuration
Authors: Carlos Ansótegui, Yuri Malitsky, Meinolf Sellmann
AAAI 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on SAT show that this combination marks a significant step forward in our ability to tune algorithms instance-specifically. We then apply the new methodology to a number of Max SAT problem domains and show that the resulting solvers consistently outperform the best existing solvers on the respective problem families. In fact, the solvers presented here were independently evaluated at the 2013 Max SAT Evaluation where they won six of the eleven categories. |
| Researcher Affiliation | Collaboration | Carlos Ansótegui, Department of Computer Science, University of Lleida, Spain, carlos@diei.udl.es; Yuri Malitsky, Insight Centre for Data Analytics, University College Cork, Ireland, yuri.malitsky@insight-centre.org; Meinolf Sellmann, IBM Watson Research Center, Yorktown Heights, NY 10598, USA, meinolf@us.ibm.com |
| Pseudocode | Yes | Algorithm 1: Instance-Specific Algorithm Configuration. ISAC-Learn(A, T, F, κ): (F̄, s, t) ← Normalize(F); (k, C, S) ← Cluster(T, F̄, κ); for all i = 1, ..., k do P_i ← GGA(A, S_i); return (k, P, C, s, t). ISAC-Run(A, x, k, P, C, d, s, t): f ← Features(x); f̄_i ← 2(f_i/s_i) − t_i; i ← arg min_i ‖f̄ − C_i‖; return A(x, P_i). (A Python sketch of this loop follows the table.) |
| Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described in this paper. |
| Open Datasets | Yes | We used three benchmarks in our numeric analysis obtained from the 2012 Max SAT Evaluation: (i) the 8 families of partial Max SAT crafted instances with a total of 372 instances, (ii) the 13 families of partial Max SAT industrial instances with a total of 504 instances, and (iii) the mixture of both sets. This data was split into training and testing sets. Crafted had 130 testing and 242 training instances, while Industrial instances were split so there were 170 testing and 334 training. Our third dataset merged Crafted and Industrial instances and had 300 testing and 576 training instances. |
| Dataset Splits | Yes | We split our experiments into two parts. We first show the performance of ISAC+ on partial Max SAT instances, crafted instances, industrial instances, and finally combining both. In the second set of experiments we train solvers for Max SAT (MS), Weighted Max SAT (WMS), Partial Max SAT (PMS), and Weighted Partial Max SAT (WPMS). In these datasets we combine instances from the crafted, industrial, and random subcategories. Crafted had 130 testing and 242 training instances, while Industrial instances were split so there were 170 testing and 334 training. Our third dataset merged Crafted and Industrial instances and had 300 testing and 576 training instances. In this section we conduct a 10-fold cross validation on the four categories of the 2012 Max SAT Evaluation (Argelich et al. 2012). These are plain Max SAT instances, weighted Max SAT, partial Max SAT, and weighted partial Max SAT. (A sketch of this split and cross-validation protocol follows the table.) |
| Hardware Specification | Yes | The times were measured on dual Intel Xeon 5540 (2.53 GHz) quad-core Nehalem processors and 24 GB of DDR-3 memory (1333 MHz). We conducted our experimentation on the same environment as the Max SAT Evaluation 2012 (Argelich et al. 2012): operating system Rocks Cluster 4.0.0 Linux 2.6.9, processor AMD Opteron 248 Processor 2 GHz, memory 0.5 GB and compilers GCC 3.4.3 and javac JDK 1.5.0. |
| Software Dependencies | Yes | We conducted our experimentation on the same environment as the Max SAT Evaluation 2012 (Argelich et al. 2012): operating system Rocks Cluster 4.0.0 Linux 2.6.9, processor AMD Opteron 248 Processor 2 GHz, memory 0.5 GB and compilers GCC 3.4.3 and javac JDK 1.5.0. The solvers we ran on the partial Max SAT industrial and crafted instances were: QMaxSat-g2 (this is the solver we tune), pwbo2.0, QMaxSat, PM2, ShinMaxSat, Sat4j, WPM1, wbo1.6, WMaxSatz+, WMaxSatz09, akmaxsat, akmaxsat_ls, iut_rr_rv and iut_rr_ls. |
| Experiment Setup | Yes | We used a timeout of 50 seconds when training these solvers, but employed a 600 second timeout to evaluate the solvers on each respective dataset. When running algorithm A on an input instance x, ISAC first computes the features of the input and normalizes them using the previously stored scaling and translation values for each feature. Then, the instance is assigned to the nearest cluster. Finally, ISAC runs A on x using the parameters for this cluster. In our case, we restrict clusters to have at least 50 instances. The particular version of QMaxSat, QMaxSat-g2, that we use in our evaluation was the winner of the industrial partial Max SAT category at the Max SAT 2012 Evaluation. QMaxSat inherits its parameters from glucose: -rnd-init, -luby, -rnd-freq, -var-dec, -cla-decay, -rinc and -rfirst (Audemard and Simon 2012). All experiments were run with a 2,100 second timeout. |
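
The pseudocode and experiment-setup rows describe ISAC's two phases: offline, normalize the training features, cluster the instances, and tune one parameter set per cluster; online, normalize a new instance's features, assign it to the nearest cluster center, and run the solver with that cluster's parameters. Below is a minimal Python sketch of that loop under stated assumptions: `KMeans` stands in for the clustering used by ISAC, the `tune` and `solve` callbacks stand in for the GGA tuner and the target solver, and the paper's minimum cluster size of 50 is omitted for brevity.

```python
import numpy as np
from sklearn.cluster import KMeans  # stand-in for ISAC's own clustering step

def isac_learn(tune, instances, features, k):
    """Offline phase: normalize features to [-1, 1], cluster the training
    instances, and tune one parameter set per cluster (ISAC-Learn)."""
    F = np.asarray(features, dtype=float)
    s = F.max(axis=0) - F.min(axis=0)      # per-feature scaling
    s[s == 0] = 1.0
    t = 2.0 * F.min(axis=0) / s + 1.0      # per-feature translation
    Fn = 2.0 * F / s - t                   # f_bar = 2 f / s - t, mapped to [-1, 1]
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(Fn)
    centers, params = [], []
    for c in range(k):
        members = [x for x, lab in zip(instances, labels) if lab == c]
        centers.append(Fn[labels == c].mean(axis=0))
        params.append(tune(members))       # one GGA-style tuning run per cluster
    return np.array(centers), params, s, t

def isac_run(solve, extract_features, x, centers, params, s, t):
    """Online phase: normalize the new instance with the stored (s, t),
    pick the nearest cluster, and solve with its parameters (ISAC-Run)."""
    f = 2.0 * np.asarray(extract_features(x), dtype=float) / s - t
    i = int(np.argmin(np.linalg.norm(centers - f, axis=1)))
    return solve(x, params[i])
```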
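
The dataset-splits row describes two evaluation protocols: a fixed train/test split for the partial Max SAT experiments (e.g. 242 training and 130 testing crafted instances) and a 10-fold cross validation over the four 2012 Max SAT Evaluation categories. The sketch below illustrates both; the `evaluate(train, test)` callback, which would train ISAC+ on one part and score it on the other, is a placeholder and not part of the paper.

```python
from sklearn.model_selection import KFold, train_test_split

def fixed_split(instances, n_test, seed=0):
    """Hold out a fixed number of test instances, e.g. 130 of the 372
    crafted partial Max SAT instances."""
    train, test = train_test_split(instances, test_size=n_test, random_state=seed)
    return train, test

def ten_fold_cv(instances, evaluate, seed=0):
    """10-fold cross validation over one category (MS, WMS, PMS, or WPMS)."""
    folds = KFold(n_splits=10, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in folds.split(instances):
        train = [instances[i] for i in train_idx]
        test = [instances[i] for i in test_idx]
        scores.append(evaluate(train, test))
    return scores
```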