Provably Adversarially Robust Nearest Prototype Classifiers

Authors: Václav Voráček, Matthias Hein

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "5. Experiments: The code for experiments is available in our repository where we also provide the training details. We first evaluate the improvements in the certification of better lower bounds resp. exact computation compared to the ones of (Saralajew et al., 2020) as well as (Wang et al., 2019). In a second set of experiments we compare our ℓp-PNPC to the ℓp-NPC of (Saralajew et al., 2020) resp. to nearest neighbor classification as well as deterministic and probabilistic certification techniques for neural networks on MNIST and CIFAR10 (see App. H)."
Researcher Affiliation | Academia | "Václav Voráček¹, Matthias Hein¹. ¹University of Tübingen, Germany. Correspondence to: Václav Voráček <vaclav.voracek@uni-tuebingen.de>, Matthias Hein <matthias.hein@uni-tuebingen.de>."
Pseudocode | Yes | "Algorithm 1 Sketch of certification algorithm for correctly classified point z" (an illustrative code sketch of such a certificate is given after the table).
Open Source Code | Yes | "The code is available in our repository. [...] The code for experiments is available in our repository (https://github.com/vvoracek/Provably-Adversarially-Robust-Nearest-Prototype-Classifiers)."
Open Datasets | Yes | "Using efficient improved lower bounds we train our Provably adversarially robust NPC (PNPC), for MNIST which have better ℓ2-robustness guarantees than neural networks. [...] Our PNPC has on CIFAR10 higher certified robust accuracy than the empirical robust accuracy reported in (Laidlaw et al., 2021)."
Dataset Splits | No | The paper mentions using training and test sets but does not specify validation splits or detailed percentages/counts for each split.
Hardware Specification | No | "The training time is about a few hours on a laptop."
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies like programming languages or libraries.
Experiment Setup | No | The paper states that training details are provided in their repository, but does not include specific hyperparameters or system-level training settings in the main text.
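To make the certification idea referenced in the Pseudocode row concrete, the sketch below computes two standard lower bounds on the ℓ2 robust radius of a nearest prototype classifier: the triangle-inequality bound of the kind used by Saralajew et al. (2020), and a tighter pairwise bound based on distances to bisecting hyperplanes. This is a minimal illustration under assumed function names and data layout; it is not the authors' Algorithm 1, whose exact ℓ2 certificates and improved ℓp bounds are more involved and live in the linked repository.

```python
import numpy as np


def certified_radius_triangle(x, prototypes, proto_labels, true_label):
    """Triangle-inequality lower bound on the robust radius of an NPC.

    For a correctly classified point x, no perturbation of norm smaller than
    (d_wrong - d_correct) / 2 can flip the prediction, where d_correct and
    d_wrong are the distances to the nearest correct-class and wrong-class
    prototypes. Valid for any norm (here evaluated with the l2 norm).
    """
    dists = np.linalg.norm(prototypes - x, axis=1)   # distances to all prototypes
    same = proto_labels == true_label
    d_correct = dists[same].min()                    # nearest correct-class prototype
    d_wrong = dists[~same].min()                     # nearest wrong-class prototype
    if d_correct >= d_wrong:                         # misclassified: no certificate
        return 0.0
    return (d_wrong - d_correct) / 2.0


def certified_radius_l2_pairwise(x, prototypes, proto_labels, true_label):
    """Tighter pairwise l2 bound via bisecting hyperplanes.

    To reach the decision region of a wrong-class prototype w-, a perturbed
    point must end up closer to w- than to every correct-class prototype w+,
    i.e. cross every bisecting hyperplane H(w+, w-). The required perturbation
    is therefore at least
        max_{w+} (||x - w-||^2 - ||x - w+||^2) / (2 ||w- - w+||),
    and the certificate is the minimum of this quantity over all w-.
    """
    same = proto_labels == true_label
    correct, wrong = prototypes[same], prototypes[~same]
    radius = np.inf
    for w_minus in wrong:
        d_minus_sq = np.sum((x - w_minus) ** 2)
        # signed distance from x to each bisecting hyperplane between w+ and w-
        gaps = (d_minus_sq - np.sum((x - correct) ** 2, axis=1)) / (
            2.0 * np.linalg.norm(correct - w_minus, axis=1)
        )
        radius = min(radius, gaps.max())
    return max(radius, 0.0)


if __name__ == "__main__":
    # Hypothetical toy setup: 20 random prototypes in MNIST dimension.
    rng = np.random.default_rng(0)
    protos = rng.normal(size=(20, 784))
    labels = rng.integers(0, 10, size=20)
    x = protos[0] + 0.1 * rng.normal(size=784)       # point near a class-labels[0] prototype
    print(certified_radius_triangle(x, protos, labels, labels[0]))
    print(certified_radius_l2_pairwise(x, protos, labels, labels[0]))
```

Both functions return 0 for a misclassified point, and the pairwise bound is never smaller than the triangle-inequality one, since ||w- - w+|| ≤ d_correct + d_wrong for the nearest pair.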