Scalable Verification of Quantized Neural Networks

Authors: Thomas A. Henzinger, Mathias Lechner, Đorđe Žikelić (pp. 3787-3795)

AAAI 2021

Reproducibility Variable Result LLM Response
Research Type Experimental Our experiments demonstrate that our proposed methods allow a speedup of up to three orders of magnitude over existing approaches.
Researcher Affiliation Academia Thomas A. Henzinger, Mathias Lechner, Đorđe Žikelić, IST Austria, Klosterneuburg, Austria {tah, mlechner, dzikelic}@ist.ac.at
Pseudocode No The paper describes an algorithm and states that its rules are given in a technical report, but no pseudocode or algorithm blocks are provided in the paper itself.
Open Source Code Yes Both benchmark networks are publicly available (Footnote 3: https://github.com/mlech26l/qnn_robustness_benchmarks)
Open Datasets Yes Our first evaluation considers the adversarial robustness verification of image classifiers trained on the MNIST dataset (LeCun et al. 1998). In the second evaluation, we repeat the experiment on the slightly more complex Fashion-MNIST dataset (Xiao, Rasul, and Vollgraf 2017).
Dataset Splits No The paper specifies how many test samples are used for evaluation ('we check the first 100 test samples with an attack radius of ε = 1, the next 100 test samples with ε = 2, and the next 200 test samples with ε = 3 and ε = 4 respectively.') but does not provide explicit training/validation/test dataset splits for reproducibility.
Hardware Specification Yes All experiments are run on a 14-core Intel W-2175 CPU with 64GB of memory.
Software Dependencies No We used Boolector (Niemetz, Preiner, and Biere 2015) with the SAT solvers Lingeling (Biere 2017) (baseline only) and CaDiCaL (Biere 2019) (baseline + our improvements) as the SAT backend. While specific software packages are named, explicit version numbers (e.g., v1.2.3) are not provided, only publication years.
Experiment Setup Yes The network studied in our benchmark consists of four fully-connected layers (784, 64, 32, 10), resulting in 52,650 parameters in total. It was trained using a quantization-aware training scheme with a 6-bit quantization. We check the l∞-robustness of networks against adversarial attacks (Szegedy et al. 2013). In particular, for MNIST we check the first 100 test samples with an attack radius of ε = 1, the next 100 test samples with ε = 2, and the next 200 test samples with ε = 3 and ε = 4 respectively.
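The architecture in the experiment-setup row can be sketched in NumPy to make the parameter count concrete. The layer sizes (784, 64, 32, 10) and the 6-bit width come from the paper; the symmetric uniform quantization scheme, the random weight initialization, and the ReLU activations below are assumptions added for illustration, not the authors' exact training setup.

```python
import numpy as np

# Layer sizes and bit width as reported in the paper; everything else
# (quantization scheme, initialization, ReLU) is a hypothetical sketch.
LAYER_SIZES = [784, 64, 32, 10]
BITS = 6

def quantize(w, bits=BITS):
    """Symmetric uniform quantization of weights to signed `bits`-bit integers."""
    qmax = 2 ** (bits - 1) - 1            # 31 for 6 bits
    scale = np.abs(w).max() / qmax or 1.0  # avoid division by zero on all-zero w
    return np.round(w / scale).astype(np.int8), scale

rng = np.random.default_rng(0)
weights, biases = [], []
for n_in, n_out in zip(LAYER_SIZES, LAYER_SIZES[1:]):
    weights.append(rng.standard_normal((n_in, n_out)) * 0.1)
    biases.append(np.zeros(n_out))

# Parameter count: 784*64+64 + 64*32+32 + 32*10+10 = 52,650,
# matching the total reported in the paper.
n_params = sum(w.size + b.size for w, b in zip(weights, biases))
print(n_params)  # 52650

def forward(x):
    """Forward pass with quantized weights; ReLU on hidden layers is assumed."""
    for i, (w, b) in enumerate(zip(weights, biases)):
        qw, scale = quantize(w)
        x = x @ (qw * scale) + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)
    return x

logits = forward(rng.random(784))
```

The assertion-friendly part is the parameter count, which depends only on the layer sizes given in the paper; an l∞ attack with radius ε then corresponds to perturbing each input pixel by at most ε integer quantization levels.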