Learning-Based Abstractions for Nonlinear Constraint Solving

Authors: Sumanth Dathathri, Nikos Arechiga, Sicun Gao, Richard M. Murray

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | This section details the results of experiments on various benchmark sets on a 32-core AMD Opteron machine at 2.3 GHz with 96 GB of RAM. Experiments show that the proposed techniques significantly improve the performance of state-of-the-art constraint solvers on many challenging benchmarks. |
| Researcher Affiliation | Collaboration | Sumanth Dathathri (1), Nikos Arechiga (2), Sicun Gao (3), and Richard M. Murray (1). (1) Computing and Mathematical Sciences, California Institute of Technology; (2) Toyota Info Technology Center USA; (3) Computer Science and Engineering, University of California, San Diego. |
| Pseudocode | Yes | Algorithm 1: Sampling Procedure. Algorithm 2: Algorithm to learn abstractions for sets of nonlinear arithmetic constraints. |
| Open Source Code | No | The paper provides a link to benchmark problem instances but does not explicitly state that the source code for its methodology is available, nor does it provide a link to it. |
| Open Datasets | Yes | Benchmark problem instances can be found at https://github.com/dathath/IJCAI_2017_SD. |
| Dataset Splits | No | The paper discusses sampling to generate instances for learning but does not specify explicit training, validation, or test dataset splits in terms of percentages, counts, or predefined citations for reproducibility. |
| Hardware Specification | Yes | This section details the results of experiments on various benchmark sets on a 32-core AMD Opteron machine at 2.3 GHz with 96 GB of RAM. |
| Software Dependencies | No | The paper mentions using the dReal and Z3 constraint solvers and K-means for clustering, but does not specify version numbers for any software dependencies. |
| Experiment Setup | Yes | A timeout of 200 seconds is set for abstracting each subset; if the abstraction procedure fails to finish in this period, the constraints are used as is. The Hamming distance threshold here was set to 4, which results in subsets with 3 constraints each. In a second configuration, the Hamming distance threshold is set to 3 and n is set to 5. The number of clusters was set to 2. |
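The subset-formation step described in the experiment setup (grouping constraints whose variable-support vectors fall within a Hamming distance threshold) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the `hamming` and `group_constraints` names, the greedy seed-based grouping strategy, and the example support vectors are all assumptions made for the sketch.

```python
def hamming(a, b):
    # Hamming distance between two equal-length 0/1 support vectors,
    # where entry i marks whether the constraint mentions variable i.
    return sum(x != y for x, y in zip(a, b))

def group_constraints(supports, threshold):
    # Greedily assign each constraint to the first group whose seed
    # (first member) is within `threshold` Hamming distance; otherwise
    # start a new group. Returns lists of constraint indices.
    groups = []
    for i, s in enumerate(supports):
        for g in groups:
            if hamming(s, supports[g[0]]) <= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Hypothetical 0/1 vectors marking which variables each constraint uses.
supports = [
    (1, 1, 0, 0),
    (1, 0, 0, 0),
    (0, 0, 1, 1),
]
print(group_constraints(supports, threshold=2))  # → [[0, 1], [2]]
```

With a looser threshold, the groups grow larger, which matches the paper's reported trade-off: the threshold of 4 produced subsets of 3 constraints each.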