Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks

Authors: Wei-An Lin, Chun Pong Lau, Alexander Levine, Rama Chellappa, Soheil Feizi

NeurIPS 2020

Reproducibility assessment (variable, result, and LLM response):
Research Type: Experimental. "We empirically show that either standard adversarial training or on-manifold adversarial training alone does not provide sufficient robustness, while DMAT achieves improved robustness against unseen attacks." Table 1 presents classification accuracies for different adversarial training methods against standard and on-manifold adversarial attacks.
Researcher Affiliation: Academia. Wei-An Lin (University of Maryland, walin@umd.edu), Chun Pong Lau (Johns Hopkins University, clau13@jhu.edu), Alexander Levine (University of Maryland, alevine0@cs.umd.edu), Rama Chellappa (Johns Hopkins University, rchella4@jhu.edu), Soheil Feizi (University of Maryland, sfeizi@cs.umd.edu).
Pseudocode: No. The paper describes its methods but does not include any structured pseudocode or algorithm blocks.
Open Source Code: No. The paper states only that "Codes and models will be available in this link," i.e., no code was released at publication time.
Open Datasets: Yes. "Our OM-ImageNet is built upon the Mixed-10 dataset introduced in the robustness library [39], which consists of images from 10 superclasses of ImageNet."
Dataset Splits: No. The paper specifies a training set (Dtr, 69,480 samples) and a test set (Dte, 7,200 samples) but does not specify a separate validation split.
Hardware Specification: No. The paper does not report the hardware used for the experiments, such as GPU models, CPU specifications, or cloud computing resources.
Software Dependencies: No. The paper mentions software such as StyleGAN and the robustness library but does not give version numbers for these or for underlying dependencies (e.g., Python, PyTorch/TensorFlow, CUDA).
Experiment Setup: Yes. "During training, we use the PGD-5 threat model in the image space for (8), whereas for (9) we consider OM-FGSM and OM-PGD-5 as the threat models. For completeness, we also consider robust training using TRADES (β = 6) [43] in the image space using the PGD-5 threat model. All the models are trained by the SGD optimizer with the cyclic learning rate scheduling strategy in [44], momentum 0.9, and weight decay 5×10⁻⁴ for a maximum of 20 epochs."
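The quoted setup can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: `pgd_attack` runs a PGD-style inner maximization (5 steps, as in PGD-5) on a stand-in logistic-regression "model" rather than the paper's deep classifier, and `cyclic_lr` shows a simple triangular schedule in the spirit of the cyclic policy of [44]. The ε budget, step size, peak learning rate, and all function names are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, b=0.0, eps=8/255, alpha=2/255, steps=5):
    """PGD with `steps` iterations on a logistic-regression stand-in
    for the classifier, inside an Linf ball of radius eps around x.
    eps/alpha are illustrative, not the paper's values."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(x_adv @ w + b)
        # gradient of binary cross-entropy w.r.t. the input
        grad = (p - y) * w
        # ascend the loss, then project back into the eps-ball
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
        # keep pixels in the valid [0, 1] range
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv

def cyclic_lr(step, total_steps, lr_max=0.2):
    """Triangular 'cyclic' schedule: ramp linearly up to lr_max at the
    midpoint of training, then back down to zero."""
    mid = total_steps / 2
    if step <= mid:
        return lr_max * step / mid
    return lr_max * (total_steps - step) / mid
```

For example, attacking a random input with the default budget keeps the perturbation within the ε-ball while increasing the cross-entropy loss; `cyclic_lr(step, total_steps)` would then supply the learning rate at each SGD step over the 20-epoch run.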