Semidefinite relaxations for certifying robustness to adversarial examples

Authors: Aditi Raghunathan, Jacob Steinhardt, Percy S. Liang

NeurIPS 2018

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "In this section, we evaluate the performance of our certificate (7) on neural networks trained using different robust training procedures, and compare against other certificates in the literature. Networks. We consider feedforward networks that are trained on the MNIST dataset of handwritten digits using three different robust training procedures. ... Table 1 presents the performance of the three different certification procedures on the three networks." |
| Researcher Affiliation | Academia | "Aditi Raghunathan, Jacob Steinhardt and Percy Liang Stanford University {aditir, jsteinhardt, pliang}@cs.stanford.edu" |
| Pseudocode | No | No structured pseudocode or algorithm blocks found. |
| Open Source Code | Yes | "All code, data and experiments for this paper are available on the Codalab platform at https://worksheets.codalab.org/worksheets/0x6933b8cdbbfd424584062cdf40865f30/." |
| Open Datasets | Yes | "Networks. We consider feedforward networks that are trained on the MNIST dataset of handwritten digits using three different robust training procedures." |
| Dataset Splits | No | The paper mentions using a "holdout set" for hyperparameter tuning but does not provide specific percentages or counts for the training, validation, and test splits. |
| Hardware Specification | Yes | "On a 4-core CPU, the average SDP computation took around 25 minutes and the LP around 5 minutes per example." |
| Software Dependencies | No | The paper mentions the "YALMIP toolbox" with "MOSEK as a backend" but does not provide version numbers for these software components (see the solver sketch after the table). |
| Experiment Setup | Yes | "The stepsize of the PGD attack was set to 0.1, number of iterations to 40, perturbation size ϵ=0.3 and weight on adversarial loss to 1/1." (A hedged PGD sketch with these hyperparameters follows the table.) |
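
The paper solves its SDP through the YALMIP toolbox with MOSEK as a backend, without stating versions. As a rough analogue of that solver stack, the following is a minimal sketch in Python using cvxpy with MOSEK as the backend; the toy problem, the variable names, and the choice of cvxpy are assumptions for illustration, not the authors' certificate (7) or their MATLAB code.

```python
# Minimal sketch: a small generic SDP solved with MOSEK as the backend,
# loosely analogous to the YALMIP/MOSEK stack the paper reports.
# The toy problem below is illustrative only, NOT the paper's certificate (7).
import cvxpy as cp
import numpy as np

np.random.seed(0)
n = 4
C = np.random.randn(n, n)
C = (C + C.T) / 2                           # symmetric cost matrix

P = cp.Variable((n, n), symmetric=True)     # matrix variable of the SDP
constraints = [P >> 0,                      # positive semidefinite constraint
               cp.diag(P) == 1]             # unit diagonal
prob = cp.Problem(cp.Maximize(cp.trace(C @ P)), constraints)
prob.solve(solver=cp.MOSEK)                 # needs a MOSEK license; cp.SCS also works
print("optimal value:", prob.value)
```

Pinning the versions of cvxpy and MOSEK used by such a script is precisely the dependency information the Software Dependencies row flags as missing.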
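
The Experiment Setup row quotes the PGD configuration (step size 0.1, 40 iterations, ϵ=0.3). Below is a minimal, hedged sketch of an L∞ PGD attack with those hyperparameters, written against a generic PyTorch classifier; the function name and the assumption of inputs in [0, 1] (MNIST pixels) are illustrative, and this is not the authors' training code.

```python
# Minimal sketch of L-infinity PGD with the reported hyperparameters
# (step size 0.1, 40 iterations, eps = 0.3). Illustrative only.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, step_size=0.1, n_iter=40):
    """Repeatedly step along the sign of the loss gradient, then project
    back into the eps-ball around the clean input x (assumed in [0, 1])."""
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + step_size * grad.sign()  # ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)          # project onto eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                     # keep valid pixel range
    return x_adv.detach()
```

In adversarial training, the model parameters would then be updated on the loss evaluated at pgd_attack(model, x, y), weighted as the quoted setup describes.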