Learning SMT(LRA) Constraints using SMT Solvers

Authors: Samuel Kolb, Stefano Teso, Andrea Passerini, Luc De Raedt

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically evaluate our approach on both synthetic instances and benchmark problems taken from the SMT-LIB benchmarks repository.
Researcher Affiliation | Academia | KU Leuven, Belgium; University of Trento, Italy
Pseudocode | Yes | Algorithm 1: Incremental SMT Constraint Learner (INCAL)
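Algorithm 1 incrementally grows the formula size until some SMT(LRA) formula of that size is consistent with all labeled examples. A minimal sketch of that outer loop, assuming a restriction to conjunctions of halfspaces over a small coefficient grid; the brute-force enumeration stands in for the paper's SMT(LRA) query to MathSAT, and all function names are illustrative:

```python
from itertools import combinations, product

def candidate_halfspaces(dim, grid=(-1, 0, 1), offsets=(-2, -1, 0, 1, 2)):
    """Enumerate halfspaces w.x <= c over a small coefficient grid
    (a brute-force stand-in for the paper's SMT(LRA) search)."""
    for w in product(grid, repeat=dim):
        if any(w):
            for c in offsets:
                yield (w, c)

def satisfies(conj, x):
    """True iff point x satisfies every halfspace in the conjunction."""
    return all(sum(wi * xi for wi, xi in zip(w, x)) <= c for w, c in conj)

def incal_sketch(examples, max_h=3):
    """Mirror INCAL's outer loop: grow the size bound h until some
    conjunction of h halfspaces labels every example correctly."""
    pool = list(candidate_halfspaces(dim=len(examples[0][0])))
    for h in range(1, max_h + 1):
        for conj in combinations(pool, h):
            if all(satisfies(conj, x) == label for x, label in examples):
                return h, conj  # smallest consistent hypothesis found
    return None  # no consistent formula within the budget
```

The full algorithm searches richer formulas (k clauses, h halfspaces, real-valued coefficients) by encoding the existence of a consistent formula as an SMT problem; this sketch only preserves the incremental structure of that search.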
Open Source Code | Yes | Our experimental setup uses MathSAT as SMT solver, and can be found at: https://smtlearning.github.io.
Open Datasets | Yes | We implemented INCAL and evaluated it empirically on both synthetic instances of increasing complexity and instances taken from the SMT-LIB benchmark repository [Barrett et al., 2010].
Dataset Splits | No | The paper states that 'For learning a set of 1000 randomly sampled examples is used and another set of 1000 random examples is used for measuring accuracy,' implying a training and a test set. However, it does not specify a validation set or explicit train/validation/test splits with percentages or sample counts.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU model, CPU model, memory specifications).
Software Dependencies | No | The paper states 'Our experimental setup uses MathSAT as SMT solver' but does not give a version number for MathSAT or any other software dependency.
Experiment Setup | Yes | For learning, a set of 1000 randomly sampled examples is used, and another set of 1000 random examples is used for measuring accuracy. Speed is measured by summing the time taken by the incremental steps for the right parameters k and h. For every setting evaluated using synthetic problems, 100 instances are generated and average results are shown. We use a timeout of 200s per individual learning task.
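The quoted protocol (1000 random training examples, 1000 held-out examples for accuracy) can be sketched as follows; the sampling range, the seeds, and the ground-truth constraint are illustrative assumptions, not the paper's actual generators:

```python
import random

def sample_examples(n, constraint, dim=2, lo=-1.0, hi=1.0, seed=0):
    """Draw n points uniformly from [lo, hi]^dim and label each one with
    the ground-truth constraint (illustrative synthetic-data generator)."""
    rng = random.Random(seed)
    points = [tuple(rng.uniform(lo, hi) for _ in range(dim)) for _ in range(n)]
    return [(x, constraint(x)) for x in points]

def accuracy(learned, test_set):
    """Fraction of held-out examples on which the learned formula agrees
    with the ground-truth label."""
    return sum(learned(x) == label for x, label in test_set) / len(test_set)

# Illustrative ground truth: a single SMT(LRA) atom x0 + x1 <= 0.5.
truth = lambda x: x[0] + x[1] <= 0.5
train = sample_examples(1000, truth, seed=1)   # learning set
test = sample_examples(1000, truth, seed=2)    # held-out accuracy set
# A learner would fit a formula to `train`; a formula identical to the
# ground truth scores perfect held-out accuracy:
print(accuracy(truth, test))  # 1.0
```

Per the quoted setup, this sampling would be repeated over 100 generated instances per setting and the accuracies averaged, with a 200s timeout on each learning task.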