NeuroBack: Improving CDCL SAT Solving using Graph Neural Networks
Authors: Wenxi Wang, Yang Hu, Mohit Tiwari, Sarfraz Khurshid, Kenneth McMillan, Risto Miikkulainen
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results on all SAT problems from SATCOMP-2022 (SAT, 2022) and SATCOMP-2023 (SAT, 2023) show that NeuroBack allows Kissat to solve up to 5.2% and 7.4% more problems, respectively. The experiments thus demonstrate that NeuroBack is a practical neural approach to improving CDCL SAT solvers. |
| Researcher Affiliation | Academia | Wenxi Wang, Yang Hu, Mohit Tiwari, Sarfraz Khurshid, Ken McMillan, Risto Miikkulainen. The University of Texas at Austin, Austin, TX 78712, USA. wenxiw@utexas.edu, huyang@utexas.edu, tiwari@austin.utexas.edu, khurshid@utexas.edu, kenmcm@cs.utexas.edu, risto@cs.utexas.edu |
| Pseudocode | No | The paper describes the approach and model architecture in text and diagrams but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code of the NeuroBack model and NeuroBack-Kissat is publicly available at https://github.com/wenxiwang/neuroback. |
| Open Datasets | Yes | To train NeuroBack, a new dataset called DataBack containing 120,286 data samples is created... DataBack is publicly available at https://huggingface.co/datasets/neuroback/DataBack |
| Dataset Splits | Yes | The NeuroBack model was pre-trained on the entire DataBack-PT dataset, then fine-tuned on a random 90% of DataBack-FT samples, and evaluated on the remaining 10% as a validation set (see the split sketch after the table). |
| Hardware Specification | Yes | All experiments were run on an ordinary commodity computer with one NVIDIA GeForce RTX 3080 GPU (10GB memory), one AMD Ryzen Threadripper 3970X processor (64 logical cores), and 256GB RAM. |
| Software Dependencies | No | The paper mentions 'implemented using PyTorch (Paszke et al., 2019) and PyTorch Geometric (Fey & Lenssen, 2019)' but does not provide specific version numbers for these software dependencies, which are necessary for full reproducibility. |
| Experiment Setup | Yes | Binary Cross Entropy (BCE) loss is adopted as our loss function. Besides, the AdamW optimizer (Loshchilov & Hutter, 2017) with a learning rate of 10^-4 is applied for model pre-training and fine-tuning. The number of epochs is set to 40 for pre-training, and 60 for fine-tuning... the number of blocks in the GNN subnet is set to the maximum diameter of the graph representation (i.e., L = 4). To ensure the accuracy of our model, while taking into account our limited GPU memory, the number of both GSA and LSA blocks is set to three (i.e., M = 3 and N = 3). (See the training-configuration sketch after the table.) |
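
The "Dataset Splits" row reports a random 90%/10% train/validation split of DataBack-FT after pre-training on all of DataBack-PT. The minimal Python sketch below illustrates one way such a split could be reproduced; the directory layout, file extension, and function name are hypothetical and are not taken from the NeuroBack repository.

```python
import glob
import random

def split_databack_ft(sample_paths, val_fraction=0.1, seed=0):
    """Randomly hold out a fraction of DataBack-FT samples for validation.

    Returns (train_paths, val_paths). The seed makes the split repeatable,
    which the paper does not specify; it is an assumption here.
    """
    paths = list(sample_paths)
    random.Random(seed).shuffle(paths)
    n_val = int(len(paths) * val_fraction)
    return paths[n_val:], paths[:n_val]

# Hypothetical usage (the path pattern is illustrative only):
# train_paths, val_paths = split_databack_ft(glob.glob("databack_ft/*.pt"))
```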
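The "Experiment Setup" row reports BCE loss, the AdamW optimizer with a learning rate of 10^-4, 40 pre-training epochs, and 60 fine-tuning epochs. The sketch below shows what that configuration could look like in PyTorch; the model, data loaders, and the `batch.y` label field are placeholders, and `BCEWithLogitsLoss` stands in for the reported BCE loss. It is not the authors' implementation (see their repository for that).

```python
import torch
from torch import nn

def train(model, loader, epochs, device="cuda"):
    """Train with the hyperparameters reported in the paper:
    BCE loss and AdamW at lr = 1e-4."""
    criterion = nn.BCEWithLogitsLoss()  # BCE on per-variable phase labels
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    model.to(device).train()
    for _ in range(epochs):
        for batch in loader:
            batch = batch.to(device)
            optimizer.zero_grad()
            logits = model(batch)                  # placeholder forward pass
            loss = criterion(logits, batch.y.float())
            loss.backward()
            optimizer.step()

# Per the paper: 40 epochs of pre-training, then 60 epochs of fine-tuning.
# train(model, pretrain_loader, epochs=40)
# train(model, finetune_loader, epochs=60)
```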