Should Algorithms for Random SAT and Max-SAT Be Different?

Authors: Sixue Liu, Gerard de Melo

AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results illustrate that ProMS outperforms many state-of-the-art local search solvers on random Max-SAT benchmarks. We instantiated and empirically evaluated these in a new probabilistic algorithm, called ProMS (Probability Distribution-based Max-SAT Solving), which is shown to outperform the previous state of the art on random instances. (Section 5, Experiments.)
Researcher Affiliation | Collaboration | Sixue Liu (Microsoft Research, Redmond, WA, USA) and Gerard de Melo (Rutgers University, New Brunswick, NJ, USA).
Pseudocode | Yes | Algorithm 1: ProMS. (A hedged, illustrative sketch of a probability-distribution-based flip step, not a reproduction of Algorithm 1, appears after this table.)
Open Source Code | No | The paper mentions downloading third-party tools such as 'probSAT... downloaded from EDACC' and 'MaxWalkSAT... obtained from its homepage', but it does not provide concrete access to its own ProMS source code.
Open Datasets | No | The paper mentions using 'benchmark data from the Max-SAT Evaluation 2012' and 'random 3-CNF instances from the Max-SAT Evaluation 2016', with URLs to the evaluation websites. However, it does not provide direct links to the dataset files or formal citations (with author names and years) for the datasets themselves, which would enable direct access to the data.
Dataset Splits | No | The paper states, 'In order to tune these [parameters], we use the benchmark data from the Max-SAT Evaluation 2012.' This implies a validation process, but the paper does not provide the dataset split information (percentages, sample counts, or references to predefined validation splits) needed to reproduce the data partitioning.
Hardware Specification | Yes | All experiments are carried out on a machine with Intel Core Xeon E5-2650 2.60GHz CPU and 32GB RAM under Linux.
Software Dependencies | No | The paper states 'ProMS is implemented in C, and compiled with gcc using the -O3 option for optimization.' It names the software used (C, gcc) but gives no version numbers for the compiler or for any other libraries or dependencies, which is required for reproducibility.
Experiment Setup | Yes | Our approach has three parameters: η, ζ and δ. In order to tune these, we use the benchmark data from the Max-SAT Evaluation 2012... Based on a grid search, we set the parameters as η = 2.5, ζ = r + 17.5, and δ = 0.4r - 1.4, where r is the ratio. The cutoff time is set to 300 seconds for all instances and all solvers. (A hypothetical sketch of this tuning loop also follows the table.)
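
The paper's Algorithm 1 itself is not reproduced in this report. As a rough illustration of the general idea behind probability-distribution-based local search for Max-SAT (the family that probSAT and ProMS belong to), the sketch below repeatedly picks an unsatisfied clause and flips a variable sampled with probability inversely related to its break count. The 1/(eta + break) weighting, the role of eta, and all data structures are assumptions made for illustration only; this is not the authors' ProMS.

import random

# Clauses in DIMACS style: a clause is a list of non-zero ints, where +v is
# variable v and -v is its negation. An assignment maps variable -> bool.

def clause_satisfied(clause, assignment):
    return any((lit > 0) == assignment[abs(lit)] for lit in clause)

def num_unsat(clauses, assignment):
    return sum(not clause_satisfied(c, assignment) for c in clauses)

def break_count(clauses, assignment, var):
    # Clauses satisfied now that would become unsatisfied if var were flipped.
    flipped = dict(assignment)
    flipped[var] = not flipped[var]
    return sum(1 for c in clauses
               if clause_satisfied(c, assignment)
               and not clause_satisfied(c, flipped))

def flip_step(clauses, assignment, eta=2.5):
    # One local-search move: choose a random unsatisfied clause, then sample a
    # variable from it with probability proportional to 1 / (eta + break).
    # This weighting function is an assumption, not the paper's distribution.
    unsat = [c for c in clauses if not clause_satisfied(c, assignment)]
    if not unsat:
        return assignment                     # everything satisfied already
    clause = random.choice(unsat)
    candidates = [abs(lit) for lit in clause]
    weights = [1.0 / (eta + break_count(clauses, assignment, v))
               for v in candidates]
    threshold = random.random() * sum(weights)
    acc = 0.0
    for v, w in zip(candidates, weights):
        acc += w
        if threshold <= acc:
            assignment[v] = not assignment[v]
            break
    return assignment

def local_search(clauses, n_vars, max_flips=100000, eta=2.5, seed=0):
    random.seed(seed)
    assignment = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    best = num_unsat(clauses, assignment)     # Max-SAT objective: minimize this
    for _ in range(max_flips):
        assignment = flip_step(clauses, assignment, eta)
        best = min(best, num_unsat(clauses, assignment))
        if best == 0:
            break
    return best

if __name__ == "__main__":
    # Toy formula: (x1 v x2) & (-x1 v x3) & (-x2 v -x3) & (x1 v -x3)
    toy = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
    print("fewest falsified clauses found:", local_search(toy, n_vars=3))

Competitive solvers maintain break counts incrementally rather than rescanning all clauses after each flip; the sketch favors clarity over speed.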
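
To make the reported tuning and evaluation protocol concrete, this second sketch grid-searches η, ζ and δ with the stated 300-second cutoff per instance, scoring each setting by the total number of falsified clauses over a tuning set such as the Max-SAT Evaluation 2012 benchmarks. The candidate grids, the solver binary name ./proms, its command-line flags, and its output format are all hypothetical; the paper documents neither a command-line interface nor the grid that was actually searched.

import itertools
import subprocess

CUTOFF_SECONDS = 300                      # cutoff stated in the paper

# Hypothetical candidate grids; the paper reports only the final values.
ETA_GRID = [1.5, 2.0, 2.5, 3.0]
ZETA_OFFSETS = [15.0, 17.5, 20.0]         # zeta = r + offset
DELTA_GRID = [0.2, 0.3, 0.4, 0.5]

def run_solver(instance_path, eta, zeta, delta):
    # Invoke a hypothetical solver binary; return the number of falsified
    # clauses it reports, or None on timeout or failure.
    cmd = ["./proms", instance_path,
           "--eta", str(eta), "--zeta", str(zeta), "--delta", str(delta)]
    try:
        result = subprocess.run(cmd, capture_output=True, text=True,
                                timeout=CUTOFF_SECONDS, check=True)
        return int(result.stdout.strip())
    except (subprocess.TimeoutExpired, subprocess.CalledProcessError, ValueError):
        return None

def grid_search(instances, ratio):
    # Pick the (eta, zeta, delta) triple with the lowest total cost over the
    # tuning instances; `ratio` is the clause-to-variable ratio r of the family.
    best_params, best_score = None, float("inf")
    for eta, offset, delta in itertools.product(ETA_GRID, ZETA_OFFSETS, DELTA_GRID):
        zeta = ratio + offset
        costs = [run_solver(path, eta, zeta, delta) for path in instances]
        if any(c is None for c in costs):
            continue                      # discard settings that time out or fail
        score = sum(costs)
        if score < best_score:
            best_params, best_score = (eta, zeta, delta), score
    return best_params, best_score

Since local search is randomized, a faithful reproduction would also average each configuration over several seeds before comparing scores.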