Pre-trained Adversarial Perturbations

Authors: Yuanhao Ban, Yinpeng Dong

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on typical pre-trained vision models and ten downstream tasks demonstrate that our method improves the attack success rate by a large margin compared with state-of-the-art methods.
Researcher Affiliation | Collaboration | 1) Department of Computer Science & Technology, Institute for AI, BNRist Center, Tsinghua-Bosch Joint ML Center, THBI Lab, Tsinghua University; 2) Department of Electronic Engineering, Tsinghua University; 3) RealAI
Pseudocode | No | The paper describes the methods and formulations but does not include structured pseudocode or algorithm blocks that are clearly labeled as such.
Open Source Code | Yes | Our code is publicly available at https://github.com/banyuanhao/PAP.
Open Datasets | Yes | We adopt the ILSVRC 2012 dataset [42] to generate PAPs, which are also used to pre-train the models.
Dataset Splits | No | The paper mentions using a "testing dataset" and lists various datasets like CIFAR10, CIFAR100, etc., which have standard splits, but it does not explicitly provide specific percentages or counts for a validation set split.
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. It mentions general terms like "computational resources" but no specifics.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., specific Python, PyTorch, or CUDA versions) needed to replicate the experiment.
Experiment Setup | Yes | Unless otherwise specified, we choose a batch size of 16 and a step size of 0.0002. All the perturbations should be within the bound of 0.05 under the ℓ∞ norm. We evaluate the perturbations at the iterations of 1,000, 5,000, 30,000, and 60,000, and report the best performance.
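For context, the quoted hyperparameters map onto a short universal-perturbation training loop. The sketch below is an assumption-laden illustration rather than the authors' released code: the backbone (torchvision's resnet50), the dataset path, and the feature-norm objective (a stand-in for the paper's low-level feature lifting loss) are choices made here for illustration; only the batch size (16), step size (0.0002), ℓ∞ bound (0.05), and iteration budget come from the quoted setup. The authors' actual objectives and schedules are in the repository linked above.

```python
# Minimal sketch (not the authors' implementation) of the reported setup:
# an L_inf-bounded universal perturbation trained by signed gradient ascent.
# Assumptions made for illustration: torchvision's resnet50 as the pre-trained
# backbone, an ImageFolder pointing at a local ILSVRC 2012 copy, and a simple
# feature-norm objective standing in for the paper's low-level lifting loss.
import torch
import torchvision
from torchvision import transforms

EPSILON = 0.05      # L_inf bound on the perturbation (from the paper)
STEP_SIZE = 0.0002  # per-iteration step size (from the paper)
BATCH_SIZE = 16     # batch size (from the paper)
MAX_ITERS = 60_000  # perturbations evaluated at 1k / 5k / 30k / 60k iterations

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen pre-trained backbone; the paper attacks several pre-trained vision models.
model = torchvision.models.resnet50(weights="IMAGENET1K_V1").to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)
# Early stages only, to expose low-level feature maps (choice made for this sketch).
low_level = torch.nn.Sequential(
    model.conv1, model.bn1, model.relu, model.maxpool, model.layer1)

# Input normalization is omitted for brevity; images stay in [0, 1].
transform = transforms.Compose(
    [transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor()])
dataset = torchvision.datasets.ImageFolder("/path/to/ILSVRC2012/train", transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)

# A single universal perturbation shared across all images.
delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)

it = 0
while it < MAX_ITERS:
    for images, _ in loader:
        images = images.to(device)
        # Stand-in objective: inflate the norm of low-level activations of the
        # perturbed inputs (gradient ascent on delta only; the model is frozen).
        loss = low_level((images + delta).clamp(0, 1)).pow(2).mean()
        loss.backward()
        with torch.no_grad():
            delta += STEP_SIZE * delta.grad.sign()  # signed ascent step
            delta.clamp_(-EPSILON, EPSILON)         # project back into the L_inf ball
            delta.grad.zero_()
        it += 1
        if it >= MAX_ITERS:
            break
```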