Learning Transferable Adversarial Perturbations

Authors: Krishna Kanth Nakka, Mathieu Salzmann

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments demonstrate that our approach outperforms the state-of-the-art universal and transferable attack strategies." and "We evaluate the effectiveness of our attack strategy in diverse settings."
Researcher Affiliation | Collaboration | Krishna Kanth Nakka (CVLab, EPFL, Switzerland); Mathieu Salzmann (CVLab, EPFL, Switzerland and ClearSpace, Switzerland)
Pseudocode | Yes | "Algorithm 1: Training a transferable adversarial perturbation generator" (a hedged training-loop sketch based on this algorithm appears after the table)
Open Source Code | Yes | "Our code is available at https://github.com/krishnakanthnakka/Transferable_Perturbations."
Open Datasets | Yes | "To train the generator, similarly to [11], we use data from either ImageNet [26], Comics [27], Paintings [28] or ChestX [29] as source domain, containing 1.2M, 40K, 80K, and 8K images, respectively."
Dataset Splits | Yes | "We then randomly select 5000 images from the ImageNet [26] validation set as target domain to evaluate the transferability of our attacks." (see the evaluation sketch after the table)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions using PyTorch [30] but does not provide specific version numbers for PyTorch or any other software libraries or solvers used in the experiments.
Experiment Setup | Yes | "To train them, we use the Adam optimizer with a learning rate of 2e-4 and a batch size of 16." and "For all our experiments, we set the layer l to attack to relu after conv4-1, layer3, mixed6b, denseblock8, and fire10 for VGG16, ResNet152, Inception-v3, DenseNet121, and SqueezeNet, respectively."
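
To make the quoted pseudocode and experiment-setup rows concrete, here is a minimal PyTorch-style sketch of a training loop in the spirit of Algorithm 1: a generator is trained with Adam (lr 2e-4, on a loader built with batch size 16, as quoted above) so that the perturbed image's activations at a chosen mid-layer of a frozen surrogate model diverge from the clean activations. The function names, the tanh-bounded perturbation, the epsilon budget, and the exact separation loss are assumptions for illustration, not the paper's verified implementation.

import torch
import torch.nn.functional as F

# Hypothetical sketch of "Algorithm 1: Training a transferable adversarial
# perturbation generator". The generator, surrogate model, attacked layer,
# data loader, perturbation bound, and the separation loss are assumptions;
# only the Adam settings (lr 2e-4, batch size 16 in the loader) are quoted
# from the paper.
def train_generator(generator, surrogate, layer, loader,
                    eps=10 / 255, epochs=1, device="cuda"):
    feats = {}

    def hook(_module, _inputs, output):
        # Capture the activations of the attacked mid-layer on each forward pass.
        feats["act"] = output

    handle = layer.register_forward_hook(hook)

    surrogate.eval().to(device)
    for p in surrogate.parameters():  # the surrogate is frozen; only G is trained
        p.requires_grad_(False)
    generator.train().to(device)
    opt = torch.optim.Adam(generator.parameters(), lr=2e-4)  # settings from the paper

    for _ in range(epochs):
        for x, _ in loader:  # labels are unused: the objective is label-free
            x = x.to(device)
            # Bounded perturbation, clamped back to the valid image range.
            x_adv = torch.clamp(x + eps * torch.tanh(generator(x)), 0, 1)

            with torch.no_grad():
                surrogate(x)
                f_clean = feats["act"]
            surrogate(x_adv)
            f_adv = feats["act"]

            # Push adversarial mid-layer features away from the clean ones
            # (one plausible separation objective; the paper's exact loss may differ).
            loss = -F.mse_loss(f_adv, f_clean)
            opt.zero_grad()
            loss.backward()
            opt.step()
    handle.remove()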
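
The dataset-split row can likewise be illustrated. The sketch below draws a random 5000-image subset of the ImageNet validation set and computes a simple fooling rate (the fraction of predictions changed by the transferred perturbation) on a target model. The directory path, preprocessing, epsilon, and the fooling-rate definition are assumptions for illustration.

import torch
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

# Hypothetical evaluation sketch: a random 5000-image subset of the ImageNet
# validation set, used to measure how often the generated perturbation flips
# a target model's prediction. Path, preprocessing, and metric are assumed.
def fooling_rate(generator, target_model, val_dir="imagenet/val",
                 eps=10 / 255, device="cuda"):
    tfm = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])
    val = datasets.ImageFolder(val_dir, transform=tfm)
    idx = torch.randperm(len(val))[:5000].tolist()  # random 5000-image subset
    loader = DataLoader(Subset(val, idx), batch_size=16)

    generator.eval().to(device)
    target_model.eval().to(device)
    flipped, total = 0, 0
    with torch.no_grad():
        for x, _ in loader:
            x = x.to(device)
            x_adv = torch.clamp(x + eps * torch.tanh(generator(x)), 0, 1)
            pred_clean = target_model(x).argmax(dim=1)
            pred_adv = target_model(x_adv).argmax(dim=1)
            flipped += (pred_adv != pred_clean).sum().item()
            total += x.size(0)
    return flipped / total  # fraction of predictions changed by the attack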