Robustness of Classifiers to Universal Perturbations: A Geometric Perspective
Authors: Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard, Stefano Soatto
ICLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We first evaluate the validity of the assumption of Theorem 2 for deep neural networks... In Fig. 7 (a), the average of κ_S(x) across points sampled from the validation set is shown... Fig. 7 (b) illustrates the fooling rate of the universal perturbations... (a hedged sketch of the fooling-rate metric appears below the table) |
| Researcher Affiliation | Academia | Seyed-Mohsen Moosavi-Dezfooli, École Polytechnique Fédérale de Lausanne (seyed.moosavi@epfl.ch); Alhussein Fawzi, University of California, Los Angeles (fawzi@cs.ucla.edu); Omar Fawzi, École Normale Supérieure de Lyon (omar.fawzi@ens-lyon.fr); Pascal Frossard, École Polytechnique Fédérale de Lausanne (pascal.frossard@epfl.ch); Stefano Soatto, University of California, Los Angeles (soatto@ucla.edu) |
| Pseudocode | No | No pseudocode or algorithm blocks are present in the paper. |
| Open Source Code | No | For the networks on ImageNet, we used the Caffe pre-trained models: https://github.com/BVLC/caffe/wiki/Model-Zoo. Only third-party pre-trained models are referenced; no repository with the authors' own code is provided. |
| Open Datasets | Yes | LeNet architecture... trained on the CIFAR-10 dataset; deep networks trained on ImageNet (CaffeNet (Jia et al., 2014) and ResNet-152 (He et al., 2016)) |
| Dataset Splits | Yes | In Fig. 7 (a), the average of κ_S(x) across points sampled from the validation set is shown; fooling rate of the universal perturbations (on an unseen validation set); The accuracy of the network on the test set is 78.4%. |
| Hardware Specification | No | No specific hardware (GPU/CPU models, memory amounts, or detailed computer specifications) is mentioned. |
| Software Dependencies | No | CaffeNet (Jia et al., 2014) and Caffe pre-trained models are mentioned, but no specific software versions (e.g., Caffe version, Python version, library versions) are provided. |
| Experiment Setup | Yes | The LeNet architecture we used has two convolutional layers (filters of size 5) followed by three fully connected layers. We used SGD for training, with a step size of 0.01, a momentum term of 0.9, and weight decay of 10⁻⁴; and The ResNet-18 architecture was trained on the CIFAR-10 task with stochastic gradient descent with momentum and weight decay regularization. (a hedged sketch of this setup appears below the table) |
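
As a reading aid, here is a minimal PyTorch sketch of the LeNet training setup quoted in the Experiment Setup row. The layer widths, activations, and pooling are assumptions (the paper only specifies two 5×5 convolutional layers and three fully connected layers); the optimizer settings follow the quoted hyperparameters. This is not the authors' code, which used Caffe models for the ImageNet experiments.

```python
# Sketch of the reported CIFAR-10 setup: a LeNet-style network with two
# 5x5 convolutional layers and three fully connected layers, trained with
# SGD (step size 0.01, momentum 0.9, weight decay 1e-4).
# Channel counts, ReLU, and max-pooling are assumptions, not from the paper.
import torch
import torch.nn as nn

class LeNetLike(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),  # 32x32 input -> 5x5 maps
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = LeNetLike()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)
```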
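
Likewise, a minimal sketch of the fooling-rate metric referenced in the Research Type and Dataset Splits rows: the fraction of validation samples whose predicted label changes when a fixed universal perturbation is added to the input. `model`, `val_loader`, and `v` are hypothetical stand-ins; the paper's evaluation used Caffe pre-trained models rather than PyTorch.

```python
# Sketch of the fooling-rate computation for a universal perturbation v,
# assuming a PyTorch classifier `model` and a DataLoader `val_loader`.
import torch

def fooling_rate(model, val_loader, v, device="cpu"):
    """Fraction of samples whose predicted label flips under x -> x + v."""
    model.eval()
    fooled, total = 0, 0
    with torch.no_grad():
        for x, _ in val_loader:
            x = x.to(device)
            pred_clean = model(x).argmax(dim=1)
            pred_pert = model(x + v.to(device)).argmax(dim=1)
            fooled += (pred_clean != pred_pert).sum().item()
            total += x.size(0)
    return fooled / total
```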