On the Verification of Neural ODEs with Stochastic Guarantees

Authors: Sophie Gruenbacher, Ramin Hasani, Mathias Lechner, Jacek Cyranka, Scott A. Smolka, Radu Grosu (pp. 11525-11535)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this work, we present a thorough theoretical approach to the problem of providing safety guarantees for the class of time-continuous neural networks formulated as Neural ODEs. As the main result, we develop SLR, a differentiable stochastic Lagrangian reachability framework, formulated as a global optimization problem. In particular, we prove that SLR converges (Theorem 2) to tight ellipsoidal safe regions (Theorem 1) within O(ln_γ(δ₀/r_bound) · 2n) iterations (Theorem 3).
Researcher Affiliation | Academia | Sophie Gruenbacher (1), Ramin Hasani (1,2), Mathias Lechner (3), Jacek Cyranka (4), Scott A. Smolka (5), Radu Grosu (1). Affiliations: 1 Technische Universität Wien (TU Wien); 2 Massachusetts Institute of Technology (MIT); 3 Institute of Science and Technology Austria (IST Austria); 4 University of Warsaw; 5 Stony Brook University.
Pseudocode | Yes | Algorithm 1: Finding the local minimum; Algorithm 2: Computation of ϕ_L; Algorithm 3: Computing the radius of the safety region; Algorithm 4: Stochastic Lagrangian Reachability.
Open Source Code | No | The paper does not contain an explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper is theoretical and does not involve empirical training on a dataset; therefore, no information about public dataset availability for training is provided.
Dataset Splits | No | The paper is theoretical and does not describe empirical experiments involving dataset splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and does not describe empirical experiments; therefore, no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and focuses on mathematical proofs and algorithms. It does not mention any specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and focuses on mathematical proofs and algorithms. It does not describe an empirical experimental setup with hyperparameters or training settings.
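To illustrate the kind of question SLR answers, the following is a minimal Monte Carlo sketch of estimating a reachability radius for a toy 2-D ODE: propagate points sampled from the boundary of an initial ball through the flow and record the worst-case deviation from the propagated center. All names here (`vector_field`, `rk4_flow`, `stochastic_safety_radius`) and the dynamics are illustrative assumptions, not the paper's algorithms; SLR additionally uses Lipschitz/Lagrangian bounds and stochastic global optimization to turn such an estimate into a probabilistic guarantee.

```python
import math
import random

def vector_field(x):
    # Toy 2-D nonlinear dynamics standing in for a Neural ODE's learned
    # vector field (illustrative assumption, not from the paper).
    return [-x[1] + 0.1 * math.tanh(x[0]), x[0] + 0.1 * math.tanh(x[1])]

def rk4_flow(x0, t1=1.0, steps=100):
    # Integrate the ODE from t=0 to t=t1 with a fixed-step RK4 scheme.
    h = t1 / steps
    x = list(x0)
    for _ in range(steps):
        k1 = vector_field(x)
        k2 = vector_field([x[i] + 0.5 * h * k1[i] for i in range(2)])
        k3 = vector_field([x[i] + 0.5 * h * k2[i] for i in range(2)])
        k4 = vector_field([x[i] + h * k3[i] for i in range(2)])
        x = [x[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
    return x

def stochastic_safety_radius(center, r0, samples=200, seed=0):
    # Monte Carlo surrogate for a safety-region radius: sample points on the
    # boundary of the initial ball, propagate them, and track the worst-case
    # distance from the propagated center.
    rng = random.Random(seed)
    center_t = rk4_flow(center)
    worst = 0.0
    for _ in range(samples):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x0 = [center[0] + r0 * math.cos(theta),
              center[1] + r0 * math.sin(theta)]
        worst = max(worst, math.dist(center_t, rk4_flow(x0)))
    return worst
```

Because the linear part of this toy field is a pure rotation, the estimated radius stays close to the initial radius `r0`, with only mild growth from the small `tanh` perturbation; a verified method like SLR would inflate such an estimate by Lipschitz-based error terms to obtain a sound bound.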