Learning Transferable Adversarial Examples via Ghost Networks

Authors: Yingwei Li, Song Bai, Yuyin Zhou, Cihang Xie, Zhishuai Zhang, Alan Yuille

AAAI 2020, pp. 11458-11465

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we give a comprehensive experimental evaluation of the proposed Ghost Networks."
Researcher Affiliation | Academia | ¹Johns Hopkins University, ²University of Oxford
Pseudocode | No | The paper provides mathematical formulations and figures but does not contain any structured pseudocode or algorithm blocks. (A hedged sketch of the core idea appears after this table.)
Open Source Code | Yes | "Code is available at https://github.com/LiYingwei/ghost-network. ... We release source code and provide additional experimental results in https://github.com/LiYingwei/ghost-network."
Open Datasets | Yes | "We select 5000 images from the ILSVRC 2012 validation set..." The paper uses ImageNet ("ImageNet: A large-scale hierarchical image database", Deng et al. 2009); the NeurIPS 2017 Adversarial Challenge also uses ImageNet (Deng et al. 2009).
Dataset Splits | No | The paper uses pre-trained base models and selects 5000 images from the ILSVRC 2012 validation set for testing. It does not provide explicit training/validation/test splits for the models it develops or attacks, since the method does not involve training new models from scratch.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python 3.x or PyTorch 1.x).
Experiment Setup | Yes | "If not specified otherwise, we follow the default settings in Kurakin, Goodfellow, and Bengio (2017a), i.e., step size α = 1 and the total iteration number N = min(ϵ + 4, 1.25ϵ). We set the maximum perturbation ϵ = 8 (the iteration number N = 10 in this case). For the momentum term, the decay factor μ is set to 1 as in Dong et al. (2018)." (A sketch of the attack under these settings appears after this table.)
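Since the paper itself contains no pseudocode, the following is a minimal sketch of the skip-connection erosion the paper uses to generate ghost networks from a residual base model: every skip connection is rescaled by a random factor λ drawn from a uniform distribution, resampled on each forward pass, so one pretrained network behaves like a large pool of slightly perturbed "ghosts". The wrapper class, the PyTorch framing, and the erosion range `r` are illustrative assumptions, not the authors' released implementation (see the GitHub link above for that).

```python
import torch
import torch.nn as nn

class ErodedResidualBlock(nn.Module):
    """Rescales a residual block's skip connection by lam ~ Uniform[1 - r, 1 + r],
    resampled on every forward pass, so each pass queries a different 'ghost'
    of the base network. The value r = 0.2 is a placeholder, not a paper value."""

    def __init__(self, residual_fn, r=0.2):
        super().__init__()
        self.residual_fn = residual_fn  # the block's residual branch F (skip excluded)
        self.r = r

    def forward(self, x):
        # x_{l+1} = lam * x_l + F(x_l), with lam drawn fresh on each call
        lam = 1.0 + (2.0 * torch.rand(1, device=x.device) - 1.0) * self.r
        return lam * x + self.residual_fn(x)
```

Resampling λ at every attack iteration is what lets a single pretrained model stand in for an ensemble of perturbed networks at essentially no extra cost.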
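To make the quoted experiment settings concrete, here is a minimal sketch of the momentum iterative attack (MI-FGSM, Dong et al. 2018) under those hyperparameters: step size α = 1, decay factor μ = 1, maximum perturbation ϵ = 8, and N = min(ϵ + 4, 1.25ϵ) = 10 iterations. It assumes images on a 0-255 pixel scale and a generic `model` returning logits; it is not the authors' released code.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=8.0, alpha=1.0, mu=1.0):
    """Momentum iterative FGSM with the settings quoted in the table above.
    Assumes `x` holds images on a 0-255 scale and `model` returns logits."""
    n_iter = int(min(eps + 4, 1.25 * eps))  # N = 10 when eps = 8
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)  # momentum buffer
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Accumulate the L1-normalized gradient with decay factor mu.
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        # Sign step, then project back into the eps-ball and valid pixel range.
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 255.0)
    return x_adv
```

In the paper's setting, `model` would be a ghost network (e.g., a ResNet whose blocks are wrapped as sketched above), so each iteration queries a freshly perturbed copy of the base model.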