Tight Neural Network Verification via Semidefinite Relaxations and Linear Reformulations
Authors: Jianglin Lan, Yang Zheng, Alessio Lomuscio (pp. 7272-7280)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We report experimental results based on MNIST neural networks showing that the method outperforms the state-of-the-art methods while maintaining acceptable computational overheads. For networks of approximately 10k nodes (1k, respectively), the proposed method achieved an improvement in the ratio of certified robustness cases from 0% to 82% (from 35% to 70%, respectively). |
| Researcher Affiliation | Academia | 1 Department of Computing, Imperial College London, UK 2 Department of Electrical and Computer Engineering, University of California San Diego, USA |
| Pseudocode | Yes | Algorithm 1: Implementation of layer RLT-SDP relaxation |
| Open Source Code | No | The paper mentions using third-party tools like YALMIP and MOSEK, but does not provide any statement or link indicating that the source code for their proposed method (RLT-SDP) is publicly available. |
| Open Datasets | Yes | In Experiment 2, we considered three groups of fully-connected ReLU NNs trained on the MNIST dataset. |
| Dataset Splits | No | The paper states that neural networks were 'trained on the MNIST dataset' and uses pre-existing models from other papers (e.g., 'MLP-Adv, MLP-LP and MLP-SDP from (Raghunathan, Steinhardt, and Liang 2018)'), implying that their training and validation splits were inherited from those works or standard practices. However, it does not explicitly describe the training/validation/test split percentages, absolute sample counts, or methodology for these splits within its own text, other than specifying 'All experiments were run on the first 100 images of the dataset' for evaluation. |
| Hardware Specification | Yes | The experiments were run on a Linux machine with an Intel i9-10920X 3.5 GHz 12-core CPU with 128 GB RAM. |
| Software Dependencies | No | The paper states that 'The optimisation problems were modelled by using YALMIP (Lofberg 2004) and solved using MOSEK (Andersen and Andersen 2000)', but it does not specify the version numbers for YALMIP or MOSEK, nor for any other programming languages or libraries. |
| Experiment Setup | Yes | Algorithm 1 was run with {p_s}_{s=1}^{11} = {0, 0.1, ..., 1} and k_max = 11. |
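For context on what such verification methods compute: the paper's RLT-SDP relaxation tightens the bounds that simpler schemes place on ReLU activations over an input box. The sketch below shows the most basic such scheme, interval bound propagation, for a single affine-plus-ReLU layer. It is an illustrative baseline only, not the paper's method; the weights, biases, and input box are hypothetical.

```python
# Minimal sketch of interval bound propagation (IBP) through one
# affine + ReLU layer. All numeric values below are hypothetical;
# the paper's RLT-SDP relaxation produces tighter bounds than this.

def interval_affine(W, b, lo, hi):
    """Propagate an input box [lo, hi] through y = W x + b.

    For each output, the lower bound uses lo[j] where the weight is
    non-negative and hi[j] where it is negative (and vice versa for
    the upper bound).
    """
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        u = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(u)
    return out_lo, out_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so the box is simply clipped at zero."""
    return [max(l, 0.0) for l in lo], [max(u, 0.0) for u in hi]

# Hypothetical 2x2 layer and an l_inf input box of radius 1 around 0.
W = [[1.0, -1.0], [0.5, 2.0]]
b = [0.0, -1.0]
lo, hi = [-1.0, -1.0], [1.0, 1.0]

pre_lo, pre_hi = interval_affine(W, b, lo, hi)
post_lo, post_hi = relu_interval(pre_lo, pre_hi)
print(pre_lo, pre_hi)    # [-2.0, -3.5] [2.0, 1.5]
print(post_lo, post_hi)  # [0.0, 0.0] [2.0, 1.5]
```

Certification then amounts to checking that, under such bounds, no input in the box can change the network's predicted class; tighter relaxations (LP, SDP, and the paper's combined RLT-SDP) certify more of the 100 test images reported above.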