Reachability Analysis of Deep Neural Networks with Provable Guarantees

Authors: Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The technique has been implemented and evaluated on a range of DNNs, demonstrating its efficiency, scalability, and ability to handle a broader class of networks than state-of-the-art verification approaches.
Researcher Affiliation | Academia | University of Oxford, Oxford, UK; University of Liverpool, Liverpool, UK
Pseudocode | Yes | In the i-th iteration, we do the following sequentially: (1) Compute y_i = arg inf_{x ∈ [a,b]} H(x; Y_{i-1}) as follows. Let z* = min Z_{i-1} and let k be the index of the interval [y_{k-1}, y_k] in which z* is attained. Then let y_i = (y_{k-1} + y_k)/2 − (w(y_k) − w(y_{k-1}))/(2K), so that y_i ∈ (y_{k-1}, y_k). (2) Let Y_i = Y_{i-1} ∪ {y_i}, reorder Y_i in ascending order, and update w(Y_i) = w(Y_{i-1}) ∪ {w(y_i)}. (3) Calculate z_{i-1} = (w(y_i) + w(y_{k-1}) − K(y_i − y_{k-1}))/2 and z_i = (w(y_k) + w(y_i) − K(y_k − y_i))/2, and update Z_i = (Z_{i-1} \ {z*}) ∪ {z_{i-1}, z_i}. (4) Calculate the new lower bound l_i = inf_{x ∈ [a,b]} H(x; Y_i) by letting l_i = min Z_i, and update L_i = L_{i-1} ∪ {l_i}. (5) Calculate the new upper bound u_i = min_{y ∈ Y_i} w(y) by letting u_i = min{u_{i-1}, w(y_i)}.
Open Source Code | Yes | Available at https://github.com/trustAI/DeepGO.
Open Datasets | Yes | Seven convolutional neural networks, denoted DNN-1, ..., DNN-7, were trained on the MNIST dataset.
Dataset Splits | No | The paper mentions training on the MNIST dataset and reports "testing accuracies," but does not specify the exact training, validation, or test splits (e.g., percentages or sample counts).
Hardware Specification | Yes | Our software is implemented in Matlab 2018a, running on a notebook computer with an i7-7700HQ CPU and 16GB RAM.
Software Dependencies | Yes | Our software is implemented in Matlab 2018a.
Experiment Setup | Yes | Testing accuracies range from 95% to 99%, and ε = 0.05 is used in our experiments.
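The iterative lower/upper-bound refinement quoted in the Pseudocode row is a Shubert-style adaptive sampling scheme for a K-Lipschitz function w on [a, b]. A minimal sketch of that loop is below; the function name `lipschitz_min`, the list-based bookkeeping, and the termination handling are illustrative assumptions, not the authors' Matlab implementation:

```python
def lipschitz_min(w, a, b, K, eps=0.05, max_iter=1000):
    """Bracket the global minimum of a K-Lipschitz function w on [a, b]
    to within eps (hypothetical sketch of the quoted iteration)."""
    ys = [a, b]          # sampled points y_0, ..., kept in ascending order
    ws = [w(a), w(b)]    # function values at the sampled points
    # z-value: lower bound of w on each interval [ys[j], ys[j+1]], from the
    # intersection of the two K-Lipschitz bounding lines in that interval.
    zs = [(ws[0] + ws[1] - K * (b - a)) / 2]
    upper = min(ws)      # best sampled value so far (upper bound u_i)
    for _ in range(max_iter):
        k = min(range(len(zs)), key=zs.__getitem__)  # interval attaining min Z_i
        if upper - zs[k] <= eps:                     # bounds have converged
            break
        yl, yr = ys[k], ys[k + 1]
        wl, wr = ws[k], ws[k + 1]
        # New sample y_i: where the two bounding lines intersect.
        yi = (yl + yr) / 2 - (wr - wl) / (2 * K)
        wi = w(yi)
        ys.insert(k + 1, yi)
        ws.insert(k + 1, wi)
        # Replace z* by the lower bounds of the two new sub-intervals.
        zl = (wi + wl - K * (yi - yl)) / 2
        zr = (wr + wi - K * (yr - yi)) / 2
        zs[k:k + 1] = [zl, zr]
        upper = min(upper, wi)                       # u_i = min{u_{i-1}, w(y_i)}
    return min(zs), upper                            # (l_i, u_i)
```

For example, minimizing w(x) = (x − 0.3)² on [0, 1] with K = 2 returns a pair of bounds that straddle the true minimum 0 with a gap at most eps.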