Streamlining Variational Inference for Constraint Satisfaction Problems

Authors: Aditya Grover, Tudor Achim, Stefano Ermon

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | "We show that streamlined solvers consistently outperform decimation-based solvers on random k-SAT instances for several problem sizes, shrinking the gap between empirical performance and theoretical limits of satisfiability by 16.3% on average for k = 3, 4, 5, 6."
Researcher Affiliation | Academia | "Aditya Grover, Tudor Achim, Stefano Ermon, Computer Science Department, Stanford University, {adityag, tachim, ermon}@cs.stanford.edu"
Pseudocode | Yes | "Algorithm 1 Survey Inspired Decimation(V, C)" and "Algorithm 2 Survey Inspired Streamlining(V, C, T)"
Open Source Code | Yes | "Our solver is available publicly at https://github.com/ermongroup/streamline-vi-csp."
Open Datasets | No | The paper describes how to generate random k-SAT instances but does not provide concrete access information (URL, DOI, specific repository, or citation) for a pre-existing publicly available dataset.
Dataset Splits | Yes | "We generate a set of 20 random k-SAT instances for every α and k. For these 20 'training' instances, we compute the empirical solver success rates varying T over {10, 20, ..., 100}."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions software such as "Dimetheus" but does not provide specific version numbers for any software components or libraries.
Experiment Setup | Yes | "In line with [7], we fix R = 0.01n and each success rate is the fraction of 100 instances solved for every combination of α and k considered. The constraint threshold is fixed to 2. The iteration threshold T is a hyperparameter set as follows. We generate a set of 20 random k-SAT instances for every α and k. For these 20 'training' instances, we compute the empirical solver success rates varying T over {10, 20, ..., 100}. The best performing value of T on these train instances is chosen for testing on 100 fresh instances."
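As a concrete illustration of the experiment-setup protocol quoted above, the sketch below shows standard random k-SAT generation (m = αn clauses, each over k distinct variables with random polarities) and the grid search for the iteration threshold T over 20 training instances. This is a hypothetical Python rendering for clarity; the helper names are ours, and the authors' actual solver is the implementation published at the linked repository.

```python
import random

def random_ksat(n, alpha, k, seed=None):
    """Generate a random k-SAT instance with m = round(alpha * n) clauses.

    Each clause draws k distinct variables uniformly at random and negates
    each with probability 1/2. Clauses are lists of signed integers in the
    usual DIMACS convention (positive = literal, negative = negated literal).
    """
    rng = random.Random(seed)
    m = round(alpha * n)
    clauses = []
    for _ in range(m):
        variables = rng.sample(range(1, n + 1), k)
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
    return clauses

def pick_iteration_threshold(solver, n, alpha, k,
                             grid=range(10, 101, 10), n_train=20):
    """Select T as described in the paper: for each candidate T in
    {10, 20, ..., 100}, run the solver on 20 'training' instances and
    keep the T with the highest empirical success rate.

    `solver(clauses, T)` is assumed to return True iff the instance
    was solved.
    """
    best_t, best_rate = None, -1.0
    for t in grid:
        solved = sum(bool(solver(random_ksat(n, alpha, k, seed=i), t))
                     for i in range(n_train))
        rate = solved / n_train
        if rate > best_rate:
            best_t, best_rate = t, rate
    return best_t
```

The chosen T would then be evaluated on 100 fresh instances per (α, k) combination, matching the paper's train/test separation.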