Learning To Solve Circuit-SAT: An Unsupervised Differentiable Approach

Authors: Saeed Amizadeh, Sergiy Matusevych, Markus Weimer

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The experimental results show the superior out-of-sample generalization performance of our framework compared to the recently developed NeuroSAT method."
Researcher Affiliation | Industry | "Saeed Amizadeh, Sergiy Matusevych, Markus Weimer, Microsoft, Redmond, WA 98052, {saamizad,sergiym,Markus.Weimer}@microsoft.com"
Pseudocode | No | The paper describes algorithms and models using mathematical formulations and descriptive text, but does not include any explicit pseudocode blocks or labeled algorithm listings.
Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the methodology is openly available.
Open Datasets | No | "We have used the generation process proposed in the NeuroSAT paper (Selsam et al., 2018) to generate random k-SAT CNF pairs (with k stochastically set according to the default settings in Selsam et al. (2018)). These pairs are then directly fed to NeuroSAT for training. For our method, on the other hand, we first need to convert these CNFs into circuits." (A sketch of this CNF-to-circuit conversion appears after the table.)
Dataset Splits | No | "We have trained a DG-DAGRNN model (i.e. our framework) and a NeuroSAT model on a dataset of 300K SAT and UNSAT pairs generated according to the scheme proposed in Selsam et al. (2018)."
Hardware Specification | No | "However, since we ran our experiments only on one GPU with limited memory, we had to limit the training sample size for the purpose of experimentation."
Software Dependencies | No | "For training, we have used the Adam optimization algorithm with learning rate of 10^-5, weight decay of 10^-10 and gradient clipping norm of 0.65."
Experiment Setup | Yes | "For training, we have used the Adam optimization algorithm with learning rate of 10^-5, weight decay of 10^-10 and gradient clipping norm of 0.65. We have also applied a dropout mechanism for the aggregator function during training with the rate of 20%. For the NeuroSAT model, we have used the default hyper-parameter settings proposed in Selsam et al. (2018)." (An illustrative training configuration appears after the table.)
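
The Open Datasets row quotes a pipeline that converts randomly generated k-SAT CNF formulas into circuits before feeding them to the DG-DAGRNN model. Below is a minimal Python sketch of one natural way to perform such a conversion (one OR gate per clause, shared NOT gates for negated literals, a single root AND gate); the function name, the networkx representation, and the node labels are illustrative assumptions, not the paper's released code or data format.

```python
# Hypothetical sketch: turning a CNF formula (a list of clauses, each a list of
# signed integers in DIMACS style) into an AND/OR/NOT circuit DAG.
# Edges point from inputs toward the root gate.
import networkx as nx

def cnf_to_circuit(clauses):
    """Build a circuit DAG: variable leaves -> NOT gates for negated
    literals -> one OR gate per clause -> a single root AND gate."""
    g = nx.DiGraph()
    root = "AND"
    g.add_node(root, gate="AND")
    for i, clause in enumerate(clauses):
        or_node = f"OR_{i}"
        g.add_node(or_node, gate="OR")
        g.add_edge(or_node, root)  # each clause feeds the root conjunction
        for lit in clause:
            var = f"x{abs(lit)}"
            g.add_node(var, gate="VAR")
            if lit > 0:
                g.add_edge(var, or_node)
            else:
                # Share a single NOT gate per variable (add_node is idempotent).
                not_node = f"NOT_x{abs(lit)}"
                g.add_node(not_node, gate="NOT")
                g.add_edge(var, not_node)
                g.add_edge(not_node, or_node)
    return g

# Example: (x1 v ~x2) ^ (x2 v x3)
circuit = cnf_to_circuit([[1, -2], [2, 3]])
```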
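
For the Experiment Setup row, the following is a hedged PyTorch sketch of the reported optimizer configuration (Adam, learning rate 10^-5, weight decay 10^-10, gradient clipping norm 0.65, 20% dropout on the aggregator). The model here is a placeholder, since the paper's DG-DAGRNN implementation is not publicly available; only the hyperparameter values come from the paper.

```python
# Minimal sketch of the reported training settings; `model` is a stand-in
# for the paper's DG-DAGRNN, which has no released implementation.
import torch
import torch.nn as nn

model = nn.GRUCell(input_size=64, hidden_size=64)  # placeholder architecture
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5, weight_decay=1e-10)
dropout = nn.Dropout(p=0.2)  # the paper applies 20% dropout to the aggregator

def training_step(loss):
    optimizer.zero_grad()
    loss.backward()
    # Gradient clipping with the norm reported in the paper (0.65).
    nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.65)
    optimizer.step()

# Example usage with a dummy batch and a stand-in loss:
h = model(torch.randn(8, 64))
loss = dropout(h).pow(2).mean()  # placeholder for the paper's training loss
training_step(loss)
```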