Adversarial Examples Are Not Real Features

Authors: Ang Li, Yifei Wang, Yiwen Guo, Yisen Wang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper, we re-examine the theory from a larger context by incorporating multiple learning paradigms. Notably, we find that contrary to their good usefulness under supervised learning, non-robust features attain poor usefulness when transferred to other self-supervised learning paradigms, such as contrastive learning, masked image modeling, and diffusion models. It reveals that non-robust features are not really as useful as robust or natural features that enjoy good transferability between these paradigms. ... We include two commonly adopted datasets in our study, CIFAR10 [21] and Tiny-ImageNet-200 [49]. ... Table 1: Evaluation of relative usefulness of robust features and non-robust features on four learning paradigms: MIM, CL, DM, and SL.
Researcher Affiliation | Academia | Ang Li (1), Yifei Wang (2), Yiwen Guo (3), Yisen Wang (4,5). (1) School of Electronics Engineering and Computer Science, Peking University; (2) School of Mathematical Sciences, Peking University; (3) Independent Researcher; (4) National Key Lab of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University; (5) Institute for Artificial Intelligence, Peking University
Pseudocode | No | The paper describes methods verbally and through mathematical equations but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/PKU-ML/AdvNotRealFeatures.
Open Datasets | Yes | We include two commonly adopted datasets in our study, CIFAR10 [21] and Tiny-ImageNet-200 [49].
Dataset Splits | No | The paper mentions using CIFAR10 and Tiny-ImageNet-200 and evaluating performance on their respective test sets. However, it does not explicitly provide the specific percentages or counts for training, validation, or test splits, nor does it cite a reference for standard splits used.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. It only mentions general aspects of training and evaluation.
Software Dependencies | No | The paper mentions software components like ResNet-18, MAE, DDPM, SimCLR models, and the Kornia package, but it does not specify version numbers for these or other general software dependencies like Python or deep learning frameworks (e.g., PyTorch, TensorFlow).
Experiment Setup | Yes | A Experimental Details ... Table 3: Experimental configurations in dataset construction. ... Table 4: The hyper-parameter settings of pre-training on the three versions of CIFAR10. ... Table 5: The hyper-parameter settings of pre-training on the three versions of Tiny-ImageNet-200. ... Table 6: Hyper-parameter configuration of linear probing.
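The dataset construction referenced in Table 3 follows the standard recipe for building a "non-robust" dataset: each clean image is perturbed by a targeted adversarial attack against a normally trained classifier and then relabeled with the attack's target class. The sketch below is a minimal NumPy illustration of that idea, assuming a toy linear softmax classifier and an L-infinity targeted PGD attack; the model, dimensions, and step sizes are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def pgd_targeted(W, x0, target, eps=1.0, alpha=0.25, steps=20):
    """L-inf targeted PGD against a linear softmax classifier z = W @ x.

    Minimizes cross-entropy toward `target` with signed gradient steps,
    projecting back into the eps-ball around x0 and clipping to [0, 1].
    """
    x = x0.copy()
    for _ in range(steps):
        p = softmax(W @ x)
        p[target] -= 1.0                    # d(CE)/dz for the target label
        grad = W.T @ p                      # d(CE)/dx via the chain rule
        x = x - alpha * np.sign(grad)       # descend: make `target` more likely
        x = np.clip(x, x0 - eps, x0 + eps)  # stay inside the L-inf ball
        x = np.clip(x, 0.0, 1.0)            # stay a valid (normalized) image
    return x

def make_non_robust_example(W, x, target):
    """Perturb x toward `target`, then relabel the result as `target`."""
    x_adv = pgd_targeted(W, x, target)
    return x_adv, target
```

For example, with a toy 3-class model `W = 5 * np.eye(3)` and a clean input on class 0, the attack drives the prediction to the target class, and the resulting (perturbed image, target label) pair becomes one entry of the non-robust dataset; the paper's experiments then measure how useful such features are under MIM, CL, DM, and SL pre-training.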