Efficient Neural Network Verification via Layer-based Semidefinite Relaxations and Linear Cuts
Authors: Ben Batten, Panagiotis Kouvaros, Alessio Lomuscio, Yang Zheng
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on a set of benchmark networks show that the approach here proposed enables the verification of more instances compared to other relaxation methods. The results also demonstrate that the SDP relaxation here proposed is one order of magnitude faster than previous SDP methods. |
| Researcher Affiliation | Academia | Ben Batten, Panagiotis Kouvaros, Alessio Lomuscio, Yang Zheng; Department of Computing, Imperial College London, UK; {b.batten20, p.kouvaros, a.lomuscio, y.zheng}@imperial.ac.uk |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states that they implemented their methods and used various tools, but it does not provide any link to their source code or explicitly state that their code is being released. |
| Open Datasets | Yes | We considered eight fully connected ReLU networks trained on the MNIST dataset. |
| Dataset Splits | No | The paper mentions evaluating on the test set ('first 100 images from the MNIST test set') but does not specify a separate training or validation split for the experiments it describes. |
| Hardware Specification | Yes | We performed our experiments on an Intel(R) i9-10850K CPU @ 3.60GHz machine with 32 GB of RAM, except for SDP-FO, which was carried out on an Intel i7-1065G7 with 15 GB of RAM, due to a different implementation from [Dathathri et al., 2020]. |
| Software Dependencies | No | The paper mentions using MOSEK [Mosek, 2015], YALMIP [Löfberg, 2004], and SparseCoLO [Fujisawa et al., 2009] but does not provide specific version numbers for these software dependencies, other than the publication year in the citation. |
| Experiment Setup | Yes | We varied the perturbation radius, ϵ, from 0.01 to 0.05; 2) Three NNs from [Raghunathan et al., 2018]: MLP-SDP, MLP-LP, and MLP-Adv. We followed closely the setup described in [Raghunathan et al., 2018; Dathathri et al., 2020], and tested a perturbation radius ϵ = 0.1; 3) Four deep NNs from [Singh et al., 2019a]: 6×100 (ϵ = 0.026), 9×100 (ϵ = 0.026), 6×200 (ϵ = 0.015), 9×200 (ϵ = 0.015). A sketch of how one such verification instance can be posed is given below the table. |
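
For concreteness, each experiment above is an L∞ robustness query: maximize an adversarial logit margin over the ϵ-ball around a test image and check that the optimum is at most zero. The sketch below is not the authors' code (none is linked) and does not implement their layer-based decomposition; it encodes the standard monolithic SDP relaxation of a one-hidden-layer ReLU network in the style of [Raghunathan et al., 2018], tightened with linear triangle cuts of the kind the paper combines with its relaxation. All names (`verify_margin`, `W1`, `c`, etc.) are hypothetical, and `cvxpy` with the SCS solver stands in for the YALMIP/MOSEK stack the authors report.

```python
# Minimal sketch: SDP relaxation of a one-hidden-layer ReLU network
# with linear (triangle) cuts, for L-infinity robustness certification.
import numpy as np
import cvxpy as cp

def verify_margin(W1, b1, c, x_center, eps):
    """Upper-bound the adversarial margin c @ ReLU(W1 x + b1) over the
    eps-ball around x_center (clipped to [0, 1]); <= 0 certifies robustness."""
    n1, n0 = W1.shape
    l = np.clip(x_center - eps, 0.0, 1.0)
    u = np.clip(x_center + eps, 0.0, 1.0)

    # Interval-arithmetic pre-activation bounds: zl <= W1 x + b1 <= zu.
    Wp, Wn = np.maximum(W1, 0.0), np.minimum(W1, 0.0)
    zl = Wp @ l + Wn @ u + b1
    zu = Wp @ u + Wn @ l + b1

    # Lifted moment matrix P ~ [1; x0; x1][1; x0; x1]^T, relaxed to P >> 0.
    d = 1 + n0 + n1
    P = cp.Variable((d, d), symmetric=True)
    x0, x1 = P[0, 1:1 + n0], P[0, 1 + n0:]
    P00 = P[1:1 + n0, 1:1 + n0]
    P01 = P[1:1 + n0, 1 + n0:]
    P11 = P[1 + n0:, 1 + n0:]
    z = W1 @ x0 + b1

    cons = [P >> 0, P[0, 0] == 1, x1 >= 0, x1 >= z]
    # Lifted ReLU equality x1 * (x1 - z) = 0, imposed on the diagonal.
    cons.append(cp.diag(P11) == cp.diag(W1 @ P01) + cp.multiply(b1, x1))
    # Input box encoded quadratically: (x0 - l) * (x0 - u) <= 0.
    cons.append(cp.diag(P00) <= cp.multiply(l + u, x0) - cp.multiply(l, u))
    # Linear (triangle) cuts from the pre-activation bounds.
    for i in range(n1):
        if zu[i] <= 0:            # provably inactive neuron
            cons.append(x1[i] <= 0)
        elif zl[i] >= 0:          # provably active neuron
            cons.append(x1[i] <= z[i])
        else:                     # unstable neuron: triangle upper bound
            s = zu[i] / (zu[i] - zl[i])
            cons.append(x1[i] <= s * (z[i] - zl[i]))

    prob = cp.Problem(cp.Maximize(c @ x1), cons)
    prob.solve(solver=cp.SCS)     # any SDP solver works, e.g. MOSEK
    return prob.value
```

A usage example on a small random network (hypothetical data, not one of the paper's benchmarks): `verify_margin(W1, b1, c, x, eps=0.05)` with `c` set to the difference of the adversarial and true logit rows returns a certified upper bound on the margin; repeating this per target class over, say, the first 100 MNIST test images reproduces the shape of the paper's experimental protocol. The linear cuts cost only O(n1) extra constraints yet can substantially tighten the bare SDP relaxation, which is the trade-off the paper exploits.
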