Tightening Robustness Verification of Convolutional Neural Networks with Fine-Grained Linear Approximation

Authors: Yiting Wu, Min Zhang. Pages 11674-11681.

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'We evaluate it with open-source benchmarks, including LeNet and the models trained on MNIST and CIFAR. Experimental results show that DeepCert outperforms other state-of-the-art robustness verification tools with at most 286.3% improvement to the certified lower bound and 1566.8 times speedup for the same neural networks.'
Researcher Affiliation | Academia | Yiting Wu (1), Min Zhang (1, 2); (1) Shanghai Key Laboratory for Trustworthy Computing, East China Normal University; (2) Shanghai Institute of Intelligent Science and Technology, Tongji University
Pseudocode | Yes | Algorithm 1: Binary search for lower robustness bound
Open Source Code | No | The paper states 'We implement DeepCert, the resulting verification toolkit.' and 'We implement our approach atop CNN-Cert in Python as an extension named DeepCert.', but it provides no explicit statement or link indicating that DeepCert's code is open-sourced.
Open Datasets | Yes | 'We evaluate it with open-source benchmarks, including LeNet and the models trained on MNIST and CIFAR. Experimental results show that DeepCert outperforms other state-of-the-art robustness verification tools with at most 286.3% improvement to the certified lower bound and 1566.8 times speedup for the same neural networks.'
Dataset Splits | No | The paper mentions training on the datasets and using 'test images' but does not specify the splits (e.g., percentages or counts) for training, validation, or testing subsets.
Hardware Specification | Yes | 'All the experiments were conducted on a workstation running an 8-core Intel Xeon CPU E5-2620 v4, 32 GB of RAM, and an NVIDIA Tesla K80 GPU.'
Software Dependencies | No | The paper states 'We implement our approach atop CNN-Cert in Python as an extension named DeepCert.' While Python is mentioned, no version numbers for Python or for any other libraries or dependencies (e.g., PyTorch, TensorFlow) are provided.
Experiment Setup | No | The paper describes network architectures but does not provide specific experimental setup details such as hyperparameters (e.g., learning rate, batch size, number of epochs) or optimizer settings.
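The Pseudocode row refers to the paper's Algorithm 1, a binary search for the certified lower robustness bound. A minimal sketch of the general technique, assuming a hypothetical `is_verified(eps)` callback (not from the paper) that returns True when the verifier proves the network robust within a perturbation ball of radius eps, and assuming monotonicity (if a radius is certified, so is every smaller one):

```python
def binary_search_lower_bound(is_verified, eps_max=1.0, tol=1e-4):
    """Binary search for a certified robustness lower bound.

    is_verified: callback returning True if the network is proven
        robust for all perturbations of magnitude <= eps (hypothetical
        interface; the paper's verifier computes this via linear
        approximation of the network's activations).
    eps_max: upper end of the search interval.
    tol: stop once the interval is narrower than this tolerance.
    """
    lo, hi = 0.0, eps_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if is_verified(mid):
            lo = mid  # mid is certified; try a larger radius
        else:
            hi = mid  # mid cannot be certified; shrink the interval
    return lo  # largest radius proven robust, up to tol
```

The returned value underestimates the true robustness radius by at most `tol`, which is why tighter linear approximations (the paper's contribution) directly translate into larger certified bounds at the same search cost.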