Predicting Propositional Satisfiability via End-to-End Learning
Authors: Chris Cameron, Rex Chen, Jason Hartford, Kevin Leyton-Brown
AAAI 2020, pp. 3324-3331 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We evaluated four variants of deep network architectures: the standard exchangeable architecture and message passing NeuroSAT architecture predicting only satisfiability, and an extension to both architectures where we jointly predicted satisfying variable assignments." and "We evaluated prediction accuracy for the four variants of deep neural network architectures and the four baselines." |
| Researcher Affiliation | Academia | Chris Cameron, Rex Chen, Jason Hartford, Kevin Leyton-Brown Department of Computer Science University of British Columbia {cchris13, rexctran, jasonhar, kevinlb}@cs.ubc.ca |
| Pseudocode | No | The paper describes its model architecture using mathematical equations and textual explanations but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states: "For the exchangeable architecture, we adapted the public implementation of exchangeable matrix layers from Hartford et al. (2018)." and "For the message passing network, we used Selsam et al. (2019)'s implementation in the Tensorflow framework." This indicates the authors built on existing public implementations but does not explicitly state that their own code for this paper's methodology or experiments was released. The provided URL covers only the raw data and the data-generation process. |
| Open Datasets | Yes | "To evaluate our approach, we generated uniform-random 3-SAT instances at the solubility phase transition." and "For the raw data and the full details of our data generation process, please see http://www.cs.ubc.ca/labs/beta/Projects/End2End_SAT/." (A generator sketch in this style follows the table.) |
| Dataset Splits | Yes | "For our experiments, we randomly split both satisfiable and unsatisfiable problems of each instance size into training, validation, and test sets according to an 80:10:10 ratio." (A split sketch follows the table.) |
| Hardware Specification | No | The paper states: "This work leveraged compute resources provided by Compute Canada, CFI, BCKDF, and NVIDIA". This provides general names of compute resource providers but does not specify exact GPU/CPU models, processor types, or memory amounts used for the experiments. |
| Software Dependencies | No | The paper mentions using "Tensorflow framework" and "Adam optimizer" but does not provide specific version numbers for these or any other key software components, libraries, or programming languages. |
| Experiment Setup | Yes | "For training the exchangeable architecture, we used the Adam optimizer with a learning rate of 0.0001, and minibatches of 32 examples." and "We used the hyperparameters of Selsam et al. (2019): a dimension of 128 for clause and variable embeddings, 3 hidden layers and a linear output layer for each of the MLPs, a scaling factor of 10^-10 for the ℓ2 norm to regularize the parameters, 32 iterations of message passing for each problem, a learning rate of 0.00002 for Adam, and a clipping ratio of 0.65 for clipping gradients by global norm." (These settings are collected in the configuration sketch after the table.) |
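
The Open Datasets row quotes the paper's uniform-random 3-SAT setup. As a rough illustration only (not the authors' generator, whose full details live at the linked project page), the sketch below draws clauses in the standard uniform-random way, using the clause-to-variable ratio of roughly 4.26 that marks the random 3-SAT phase transition; the function name, default ratio, and seed are illustrative assumptions.

```python
import random

def random_3sat(n_vars, ratio=4.26, seed=0):
    """Sketch of a uniform-random 3-SAT instance near the phase transition.

    Each clause draws 3 distinct variables uniformly at random and negates
    each with probability 1/2. Literals are signed integers (DIMACS style).
    Hypothetical helper, not the authors' generator.
    """
    rng = random.Random(seed)
    n_clauses = round(ratio * n_vars)
    clauses = []
    for _ in range(n_clauses):
        variables = rng.sample(range(1, n_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
    return clauses
```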
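
The Dataset Splits row describes an 80:10:10 random split applied separately to the satisfiable and unsatisfiable problems of each instance size. A minimal sketch of such a split, assuming instances are held in per-class Python lists; the function name, seed, and list representation are assumptions, not taken from the paper.

```python
import random

def split_80_10_10(instances, seed=0):
    """Randomly split one class of instances into train/val/test (80:10:10).

    Per the paper's description, this would be applied separately to the
    satisfiable and unsatisfiable instances of each instance size.
    """
    rng = random.Random(seed)
    shuffled = instances[:]
    rng.shuffle(shuffled)
    n_train = int(0.8 * len(shuffled))
    n_val = int(0.1 * len(shuffled))
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test
```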
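
Finally, the Experiment Setup row lists concrete hyperparameters. The hedged TensorFlow 2.x sketch below collects them in one place; the models themselves are omitted, and the config-dictionary layout and the `apply_clipped_gradients` helper are illustrative assumptions (only the numeric values come from the quoted text).

```python
import tensorflow as tf

# Exchangeable architecture: Adam with learning rate 1e-4, minibatches of 32.
exchangeable_optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
BATCH_SIZE = 32

# Message-passing (NeuroSAT) settings, per Selsam et al. (2019):
neurosat = {
    "embedding_dim": 128,      # clause and variable embedding dimension
    "mlp_hidden_layers": 3,    # plus a linear output layer per MLP
    "l2_scale": 1e-10,         # scaling factor on the l2 parameter norm
    "mp_iterations": 32,       # message-passing rounds per problem
    "learning_rate": 2e-5,
    "clip_global_norm": 0.65,  # gradient clipping ratio (global norm)
}
neurosat_optimizer = tf.keras.optimizers.Adam(
    learning_rate=neurosat["learning_rate"])

def apply_clipped_gradients(optimizer, grads, variables,
                            clip_ratio=neurosat["clip_global_norm"]):
    """Clip gradients by global norm before applying them, as described."""
    clipped, _ = tf.clip_by_global_norm(grads, clip_ratio)
    optimizer.apply_gradients(zip(clipped, variables))
```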