Neural Abstractions

Authors: Alessandro Abate, Alec Edwards, Mirco Giacobbe

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. "We demonstrate the efficacy of our method over multiple dynamical models from a standard benchmark set for the verification of nonlinear systems [64], as well as additional locally non-Lipschitz models, and compare our approach with Flow*, the state-of-the-art verification tool for nonlinear models [32, 33, 35]. We evaluate both approaches in safety verification using flowpipe propagation, which computes the set of reachable states from a given set of initial states up to a given time horizon. Our experiments demonstrate that our approach performs comparably with Flow* for Lipschitz continuous models, and succeeds with non-Lipschitz models that are out of range for Flow* and violate the working assumptions of many verification tools."
Researcher Affiliation: Academia. Alessandro Abate, Department of Computer Science, University of Oxford, UK; Alec Edwards, Department of Computer Science, University of Oxford, UK; Mirco Giacobbe, School of Computer Science, University of Birmingham, UK.
Pseudocode: No. The paper describes its algorithms in prose and refers to a more detailed description in the appendix, but it does not include any formally structured "Pseudocode" or "Algorithm" blocks.
Open Source Code: Yes. The code is available at https://github.com/aleccedwards/neural-abstractions-nips22.
Open Datasets: Yes. "We demonstrate the efficacy of our method over multiple dynamical models from a standard benchmark set for the verification of nonlinear systems [64]."
Dataset Splits: No. The paper describes a CEGIS loop for synthesis, in which a certifier checks validity and provides counterexamples, but it does not specify explicit training/validation/test dataset splits with percentages or counts.
Hardware Specification: No. Table 1 lists computation times but does not specify hardware details such as GPU/CPU models, memory, or processor types used for the experiments.
Software Dependencies: No. The paper mentions using "SpaceEx" and "Flow*" but does not provide version numbers for these or any other software dependencies.
Experiment Setup: Yes. "Our setup is as follows. Firstly, for a given benchmark model we define a finite time horizon T, a region of initial states X0 and a region of bad states XB. Then, we run flowpipe computations with Flow* using high-order Taylor models. Similarly, we run the procedure described in Section 3, and construct a hybrid automaton as described in Section 4 to perform flowpipe computations using SpaceEx. We present the results in Table 1. In the table, we show the Taylor model order (TM) and time step used within Flow*, as well as the structure of the neural networks used for neural abstractions. For example, we denote a network with two hidden layers, h1 neurons in the first hidden layer and h2 in the second, as [h1, h2]. The hyper-parameters for the learning procedure are chosen heuristically, but we include the relevant configuration files in the supplementary material."
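The table above repeatedly refers to flowpipe propagation: stepping a set of initial states forward in time and checking whether the resulting tube of reachable states touches a bad set. As a minimal illustration only, here is a toy interval-based sketch in Python for a one-dimensional linear system; it Euler-steps interval endpoints and is not the Taylor-model method Flow* actually implements, nor code from the paper's repository.

```python
def propagate(lo, hi, a, dt, steps):
    """Toy flowpipe for dx/dt = a*x over the box [lo, hi]:
    Euler-step the interval endpoints and record each box."""
    pipe = [(lo, hi)]
    for _ in range(steps):
        lo, hi = lo + dt * a * lo, hi + dt * a * hi
        pipe.append((min(lo, hi), max(lo, hi)))
    return pipe

def intersects(box, bad):
    """True if the box [l, h] overlaps the bad set [bl, bh]."""
    (l, h), (bl, bh) = box, bad
    return not (h < bl or bh < l)

# Safety query in the style of the setup above: initial states
# X0 = [1.0, 2.0] under contracting dynamics dx/dt = -x, bad set
# XB = [3.0, 4.0]. The flowpipe shrinks toward 0 and never reaches XB.
pipe = propagate(1.0, 2.0, -1.0, 0.1, 20)
print(any(intersects(box, (3.0, 4.0)) for box in pipe))  # False
```

For this monotone linear example, stepping the endpoints is exact; for general nonlinear dynamics a sound tool must additionally over-approximate the error, which is precisely what Taylor models in Flow* provide.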
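The Dataset Splits row notes that training data comes from a CEGIS loop rather than fixed splits. A generic counterexample-guided loop can be sketched as below; all names here (cegis, train_candidate, certify) are hypothetical illustrations of the general scheme, not the paper's actual API.

```python
def cegis(train_candidate, certify, initial_data, max_iters=10):
    """Generic CEGIS loop: alternate a learner and a certifier.
    The certifier returns None if the candidate is valid, else a
    counterexample that is added to the training data."""
    data = list(initial_data)
    for _ in range(max_iters):
        candidate = train_candidate(data)    # learner step
        counterexample = certify(candidate)  # certifier step
        if counterexample is None:           # candidate certified valid
            return candidate
        data.append(counterexample)          # refine the training set
    return None  # budget exhausted without certification

# Toy usage: "synthesize" a threshold t that upper-bounds all samples.
samples = [0.2, 0.7, 0.5]
learner = lambda data: max(data)
certifier = lambda t: 0.9 if t < 0.9 else None  # one hidden witness
print(cegis(learner, certifier, samples))  # 0.9, after one refinement
```

This is why no fixed train/validation/test split exists: the "dataset" grows during synthesis as the certifier supplies counterexamples.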
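The Experiment Setup row explains the [h1, h2] notation for abstraction networks: two hidden layers of widths h1 and h2. A minimal NumPy sketch of that shape convention follows; the ReLU activations, 2-D input/output, and random weights are assumptions for illustration and are not taken from the paper's configuration files.

```python
import numpy as np

def make_network(shape, in_dim=2, out_dim=2, seed=0):
    """Build random (weight, bias) pairs for a feed-forward network
    whose hidden-layer widths are given as `shape`, e.g. [h1, h2]."""
    rng = np.random.default_rng(seed)
    dims = [in_dim] + list(shape) + [out_dim]
    return [(rng.standard_normal((m, n)), rng.standard_normal(n))
            for m, n in zip(dims[:-1], dims[1:])]

def forward(layers, x):
    """ReLU on hidden layers, linear output layer."""
    for W, b in layers[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = layers[-1]
    return x @ W + b

# A network denoted [10, 12] in the table's notation:
# two hidden layers with 10 and 12 neurons.
net = make_network([10, 12])
y = forward(net, np.zeros(2))
print(y.shape)  # (2,)
```

ReLU networks of this form are piecewise affine, which is what allows such an abstraction to be encoded as the modes of a hybrid automaton for tools like SpaceEx.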