Efficient Certified Training and Robustness Verification of Neural ODEs

Authors: Mustafa Zeqiri, Mark Niklas Mueller, Marc Fischer, Martin Vechev

ICLR 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | In an extensive evaluation on computer vision (MNIST and FMNIST) and time-series forecasting (PHYSIO-NET) problems, we demonstrate the effectiveness of both our certified training and verification methods. We train NODE-based networks with standard, adversarial, and provable training (ϵ_t ∈ {0.11, 0.22}) and certify robustness to ℓ∞-norm bounded perturbations of radius ϵ as defined in Eq. (1). We report means and standard deviations across three runs at different perturbation levels (ϵ ∈ {0.1, 0.15, 0.2}) depending on the training method in Table 1. (See the bound-propagation sketch after the table.) |
| Researcher Affiliation | Academia | Mustafa Zeqiri, Mark Niklas Müller, Marc Fischer & Martin Vechev, Department of Computer Science, ETH Zurich, Switzerland; mzeqiri@ethz.ch, {mark.mueller,marc.fischer,martin.vechev}@inf.ethz.ch |
| Pseudocode | No | The paper does not contain any clearly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | We release our code at https://github.com/eth-sri/GAINS |
| Open Datasets | Yes | We conduct experiments on MNIST (LeCun et al., 1998), FMNIST (Xiao et al., 2017), and PHYSIO-NET (Silva et al., 2012). |
| Dataset Splits | No | The paper does not explicitly provide training/validation/test splits with percentages or sample counts in the main text; it refers to 'test set samples' but does not state the overall partitioning. |
| Hardware Specification | Yes | We implement GAINS in PyTorch (Paszke et al., 2019) and evaluate all benchmarks using a single NVIDIA RTX 2080Ti. |
| Software Dependencies | Yes | We implement GAINS in PyTorch (Paszke et al., 2019). To evaluate CURLS on the Linear Constraint Aggregation Problem (LCAP), we compare it to an LP-based approach based on Eq. (7), implemented using a commercial LP solver (GUROBI (Gurobi Optimization, LLC, 2022)). (See the LP sketch after the table.) |
| Experiment Setup | Yes | We provide detailed hyperparameter choices in App. D and E. |
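
To make the certification setting in the Research Type row concrete, the sketch below propagates an ℓ∞ box of radius ϵ through one explicit Euler step of a neural-ODE dynamics function using interval bound propagation. This is a minimal illustration, not the authors' GAINS method (GAINS propagates bounds through full adaptive-solver trajectories); the network architecture, shapes, and step size here are hypothetical.

```python
# Illustrative sketch only: interval bound propagation (IBP) through one
# explicit Euler step z' = z + dt * f(z) of a neural-ODE dynamics net f.
# All names, shapes, and the step size are hypothetical.
import torch
import torch.nn as nn

def ibp_linear(layer: nn.Linear, lb: torch.Tensor, ub: torch.Tensor):
    """Propagate an axis-aligned box [lb, ub] through a linear layer."""
    w, b = layer.weight, layer.bias
    center, radius = (ub + lb) / 2, (ub - lb) / 2
    new_center = center @ w.T + b
    new_radius = radius @ w.abs().T  # |W| maps box radii soundly
    return new_center - new_radius, new_center + new_radius

def ibp_euler_step(dynamics: nn.Sequential, lb, ub, dt: float):
    """Sound (if loose) bounds on z + dt * f(z) for all z in [lb, ub].

    Treats z and f(z) as independent intervals, which over-approximates
    the true reachable set but is always sound for dt > 0.
    """
    flb, fub = lb, ub
    for layer in dynamics:
        if isinstance(layer, nn.Linear):
            flb, fub = ibp_linear(layer, flb, fub)
        elif isinstance(layer, nn.ReLU):
            flb, fub = flb.clamp(min=0), fub.clamp(min=0)  # monotone
    return lb + dt * flb, ub + dt * fub

# l-inf ball of radius eps around an input x, matching the perturbation
# model of Eq. (1) in the paper.
x = torch.randn(1, 8)
eps = 0.1
lb, ub = x - eps, x + eps

f = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))
lb1, ub1 = ibp_euler_step(f, lb, ub, dt=0.1)
assert torch.all(lb1 <= ub1)
```

Repeating such a step along the solver trajectory yields output bounds from which robustness can be certified (e.g., by checking that the lower bound of the true-class logit exceeds the upper bounds of all other logits).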
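Similarly, the Software Dependencies row mentions an LP baseline, solved with GUROBI, that CURLS is compared against on the LCAP. The sketch below shows the general shape of such a bounding LP in gurobipy: maximizing a linear output c^T x over a box domain intersected with linear constraints Ax ≤ b. The matrices and box are invented for illustration, not taken from the paper, and running it requires a Gurobi license.

```python
# Illustrative sketch only: an LP that upper-bounds a linear output
# c^T x subject to A x <= b over a box domain, solved with GUROBI.
# The problem data below is random and chosen so the LP is feasible
# (x = 0 satisfies all constraints); it is not from the paper.
import numpy as np
import gurobipy as gp
from gurobipy import GRB

rng = np.random.default_rng(0)
n, m = 4, 6                              # variables, constraints (hypothetical)
A = rng.normal(size=(m, n))
b = rng.uniform(0.5, 1.5, size=m)        # positive, so x = 0 is feasible
c = rng.normal(size=n)

model = gp.Model("lcap_lp_sketch")
model.Params.OutputFlag = 0              # silence solver logging
x = model.addMVar(n, lb=-1.0, ub=1.0)    # box domain [-1, 1]^n
model.addConstr(A @ x <= b)

# Maximize c^T x to obtain a sound upper bound on the output.
model.setObjective(c @ x, GRB.MAXIMIZE)
model.optimize()
if model.Status == GRB.OPTIMAL:
    print("upper bound on c^T x:", model.ObjVal)
```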