Learning to Solve SMT Formulas

Authors: Mislav Balunovic, Pavol Bielik, Martin Vechev

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We performed an extensive experimental evaluation of our approach on formulas of varying complexity across 3 different theories (QF_NRA, QF_BV and QF_NIA). We show that our synthesized strategies solve 17% more formulas and achieve up to 100× runtime improvement over a state-of-the-art SMT solver.
Researcher Affiliation | Academia | Mislav Balunović, Pavol Bielik, Martin Vechev, Department of Computer Science, ETH Zürich, Switzerland
Pseudocode | Yes | Algorithm 1: Iterative algorithm used to train policy π
Open Source Code | Yes | Finally, we make our tool, called FastSMT, datasets and experiments available at http://fastsmt.ethz.ch/.
Open Datasets | Yes | We implemented our method in a system called FastSMT and evaluated its effectiveness on 5 different datasets: AProVE [19, 5], hycomp [7], core [6], leipzig [8] and Sage2 [9, 21]. These contain formulas of varying complexity across 3 logics (QF_NRA, QF_BV and QF_NIA). ... All datasets are part of the official SMT-LIB benchmarks [10] and encode formulas from both academic and industrial tools (i.e., Sage and AProVE).
Dataset Splits | Yes | For all datasets we split formulas into training, validation and test sets in predetermined ratios.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cloud instance types) used for the experiments.
Software Dependencies | No | The paper mentions PyTorch and FastText but does not provide specific version numbers for these or other software dependencies of their method.
Experiment Setup | Yes | To train our models we use 10 iterations of Algorithm 1 with an exponentially decaying exploration rate to choose between policy and random action. We train the bilinear model using FastText [26] for 5 epochs with learning rate 0.1 and 20 hidden units. We implemented the neural network model in PyTorch [37] and trained using a learning rate of 10^-4, mini-batch size 32, Adam optimizer [31] with Xavier initialization [20] and early stopping. (A training-setup sketch based on these values follows the table.)
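The Experiment Setup row lists the only neural-network training details quoted from the paper: Adam optimizer, learning rate 10^-4, mini-batch size 32, Xavier initialization, and early stopping. A minimal PyTorch sketch of such a training loop is given below; the network architecture, loss function, data loading, and the exact early-stopping criterion are illustrative assumptions and are not specified in the quoted text.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

# Hypothetical feed-forward policy network; the paper's exact architecture
# is not stated in the quoted setup, only the training hyperparameters.
class PolicyNet(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),
        )
        # Xavier initialization, as stated in the quoted setup.
        for m in self.net:
            if isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)
                nn.init.zeros_(m.bias)

    def forward(self, x):
        return self.net(x)

def train(model, train_set, val_set, max_epochs=100, patience=5):
    # From the quoted setup: Adam optimizer, learning rate 1e-4, mini-batch size 32.
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()  # loss choice assumed for illustration
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=32)

    best_val, epochs_without_improvement = float("inf"), 0
    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

        # Early stopping on validation loss (exact criterion assumed).
        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(x), y).item() for x, y in val_loader)
        if val_loss < best_val:
            best_val, epochs_without_improvement = val_loss, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break
    return model
```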