MixSATGEN: Learning Graph Mixing for SAT Instance Generation

Authors: Xinyan Chen, Yang Li, Runzhong Wang, Junchi Yan

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results show the superiority of our method with both resemblance in structure and hardness, and general applicability."
Researcher Affiliation | Academia | Xinyan Chen¹², Yang Li¹, Runzhong Wang¹, Junchi Yan¹²; ¹Department of Computer Science and Engineering & ²Zhiyuan College, Shanghai Jiao Tong University; {moss_chen,yanglily,runzhong.wang,yanjunchi}@sjtu.edu.cn
Pseudocode | Yes | "The algorithms for the training and generating process of MixSATGEN in Sec. 3 are presented in Alg. 1 and Alg. 2."
Open Source Code | Yes | https://github.com/Thinklab-SJTU/MixSATGEN
Open Datasets | Yes | "The real-world SAT instances are collected from SATLIB benchmark library (Hoos & Stützle, 2000) and SAT Competition 2021 (Balyo et al., 2021)."
Dataset Splits | No | The paper refers to a 'training dataset' and 'test instances' but does not provide specific training/validation/test splits, whether as percentages or absolute counts, which would be needed for reproducibility.
Hardware Specification | Yes | All experiments are performed on a single GeForce RTX 3090 GPU. The affinity matrix K is calculated on an AMD Ryzen Threadripper 3970X 32-core CPU with 128 GB of memory.
Software Dependencies | No | The paper mentions software tools and solvers such as CaDiCaL, Kissat, SBVA-CaDiCaL, and pygmtools, but does not specify exact version numbers for these or any other software dependencies, which is required for reproducibility.
Experiment Setup | Yes | "We first finetune the pretrained model with 0.0001 learning rate, 200 epochs and 4 iterations of message passing (Selsam et al., 2019)."