Adversarial vulnerability for any classifier

Authors: Alhussein Fawzi, Hamza Fawzi, Omar Fawzi

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conclude with numerical experimental results showing that our bounds provide informative baselines to the maximal achievable robustness on several datasets." "We evaluate our bounds in several experimental setups (CIFAR-10 and SVHN), and show that they yield informative baselines to the maximal achievable robustness."
Researcher Affiliation | Collaboration | Alhussein Fawzi (DeepMind, afawzi@google.com); Hamza Fawzi (Department of Applied Mathematics & Theoretical Physics, University of Cambridge, h.fawzi@damtp.cam.ac.uk); Omar Fawzi (ENS de Lyon, omar.fawzi@ens-lyon.fr)
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper provides no explicit statements or links indicating that open-source code for the described methodology is available.
Open Datasets | Yes | "We now evaluate our bounds on the SVHN dataset [37] which contains color images of house numbers..." "We now consider the more complex CIFAR-10 dataset [39]."
Dataset Splits | No | The paper specifies training and test image counts for SVHN (73,257 training, 26,032 test) but does not mention a separate validation split or its size/percentage. (These counts can be checked with the loading sketch below the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models or CPU specifications.
Software Dependencies | No | The paper mentions using a DCGAN model and several neural network architectures, but it does not name any software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions) needed for reproducibility.
Experiment Setup | Yes | "We train a DCGAN [30] generative model on this dataset, with a latent vector dimension d = 100, and further consider several neural network architectures for classification." (A hedged reconstruction of such a generator follows the table.)
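
As a quick check on the split sizes quoted in the Dataset Splits row, here is a minimal sketch using torchvision. The paper names no data-loading software, so the library choice and the `./data` directory are assumptions made purely for illustration:

```python
# Verify the SVHN/CIFAR-10 split sizes cited above (torchvision is an
# assumption here; the paper does not state how its data was loaded).
from torchvision import datasets

root = "./data"  # hypothetical download directory

svhn_train = datasets.SVHN(root, split="train", download=True)
svhn_test = datasets.SVHN(root, split="test", download=True)
print(len(svhn_train), len(svhn_test))  # 73257 26032, matching the counts above

cifar_train = datasets.CIFAR10(root, train=True, download=True)
cifar_test = datasets.CIFAR10(root, train=False, download=True)
print(len(cifar_train), len(cifar_test))  # 50000 10000, the standard CIFAR-10 split
```

Neither dataset ships with a validation split, which is consistent with the "No" verdict above.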
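The Experiment Setup row pins down only the generator family (DCGAN [30]) and the latent dimension d = 100. The sketch below is an assumed PyTorch reconstruction of such a generator for 32x32 images (the resolution of SVHN and CIFAR-10); the layer widths (ngf = 64) and the ConvTranspose2d/BatchNorm/ReLU stack follow the standard DCGAN recipe, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """DCGAN-style generator: latent z (d = 100, as stated in the paper)
    to a 32x32 RGB image. Widths (ngf = 64) are assumed, not from the paper."""

    def __init__(self, nz=100, ngf=64, nc=3):
        super().__init__()
        self.main = nn.Sequential(
            # z: (nz, 1, 1) -> (ngf*4, 4, 4)
            nn.ConvTranspose2d(nz, ngf * 4, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            # -> (ngf*2, 8, 8)
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            # -> (ngf, 16, 16)
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            # -> (nc, 32, 32), pixel values in [-1, 1]
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.main(z)

g = Generator()
z = torch.randn(8, 100, 1, 1)  # batch of latent vectors, d = 100
print(g(z).shape)              # torch.Size([8, 3, 32, 32])
```

Sampling z from a standard normal in this 100-dimensional latent space and classifying g(z) would mirror the generative setup on which the paper evaluates its robustness bounds.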