Learning to Attack: Adversarial Transformation Networks

Authors: Shumeet Baluja, Ian Fischer

AAAI 2018

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We demonstrate that it is possible, and that the generated attacks yield startling insights into the weaknesses of the target network. We demonstrate ATNs on both simple MNIST digit classifiers and state-of-the-art ImageNet classifiers deployed by Google, Inc.: Inception ResNet-v2." |
| Researcher Affiliation | Industry | "Shumeet Baluja, Ian Fischer. Google Research, Google, Inc." |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | "To begin our empirical exploration, we train five networks on the standard MNIST digit classification task (LeCun, Cortes, and Burges 1998). We explore the effectiveness of ATNs on ImageNet (Deng et al. 2009), which consists of 1.2 million natural images categorized into 1 of 1000 classes." |
| Dataset Splits | Yes | "The target classifier, f, used in these experiments is a pre-trained state-of-the-art classifier released by Google, Inc.: Inception ResNet v2 (IR2), that has a top-1 single-crop error rate of 19.9% on the 50,000 image validation set, and a top-5 error rate of 4.9%." |
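For context on the quoted figures, top-1 and top-5 error rates are standard multi-class metrics: an example counts as an error if the true label is absent from the classifier's k highest-scoring classes. A minimal sketch in plain Python (function names are illustrative, not from the paper):

```python
def top_k_error(probs, label, k):
    """Return 1 if the true label is not among the k highest-scoring
    classes for one example, else 0."""
    top_k = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    return 0 if label in top_k else 1

def error_rate(all_probs, labels, k):
    """Mean top-k error over a validation set
    (e.g., the 50,000-image ImageNet validation split)."""
    errors = sum(top_k_error(p, y, k) for p, y in zip(all_probs, labels))
    return errors / len(labels)
```

With k=1 and k=5 this yields the 19.9% and 4.9% figures the paper quotes for Inception ResNet v2.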
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions "TensorFlow defaults" and the "Adam optimizer" but does not provide specific version numbers for software dependencies. |
| Experiment Setup | Yes | "In the MNIST experiments, we empirically set α = 1.5. We explore three values of β to balance the two loss functions. We set the learning rate to 0.0001, α = 1.5, and β = 0.01 for all of the networks trained. All runs were trained for 0.1 epochs (6400 steps) on shuffled training set images, using the Adam optimizer and TensorFlow defaults." |
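The quoted setup references two hyperparameters: β weights the reconstruction term against the classification term of the ATN objective, and α controls the target-class reranking (in the ATN paper, the reranked target boosts the chosen class to α times the maximum classifier output, then renormalizes). The sketch below illustrates that loss structure in plain Python; the function names are illustrative and this is not the authors' code:

```python
def rerank(y, target, alpha=1.5):
    """Reranking target r_alpha: set the target class to alpha * max(y),
    then renormalize so the vector sums to 1 (assumed formulation)."""
    r = list(y)
    r[target] = alpha * max(y)
    total = sum(r)
    return [v / total for v in r]

def l2(a, b):
    """Euclidean distance between two equal-length vectors."""
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def atn_loss(x, x_adv, y_pred, y_orig, target, alpha=1.5, beta=0.01):
    """beta-weighted reconstruction loss plus loss against the
    reranked classification target, as described in the setup above."""
    return beta * l2(x_adv, x) + l2(y_pred, rerank(y_orig, target, alpha))
```

For example, when the adversarial image equals the input and the classifier output already matches the reranked target, both terms vanish and the loss is zero; training with Adam at learning rate 0.0001 then pushes g_f(x) toward that joint optimum.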