CAG: A Real-Time Low-Cost Enhanced-Robustness High-Transferability Content-Aware Adversarial Attack Generator

Authors: Huy Phan, Yi Xie, Siyu Liao, Jie Chen, Bo Yuan

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on different datasets and DNN models have verified the real-time, low-cost, enhanced-robustness, and high-transferability benefits of CAG.
Researcher Affiliation | Collaboration | Rutgers University School of Engineering; MIT-IBM Watson AI Lab, IBM Research
Pseudocode | Yes | Algorithm 1: CAG Training Algorithm (an illustrative, generic sketch of this style of training step appears after the table)
Open Source Code | No | The paper mentions using Foolbox in PyTorch for generating adversarial examples but does not provide a link or a statement about open-sourcing the code for CAG itself.
Open Datasets | Yes | To evaluate the effectiveness of CAG, we conduct extensive experiments on the CIFAR-10 (Krizhevsky, Hinton, and others 2009) and ImageNet (Deng et al. 2009) datasets. (See the loading sketch after the table.)
Dataset Splits | Yes | The classification accuracy on clean images reaches 93.48% on the 10,000 validation images. The ASR for 10,000 validation images (only 1,000 images for C&W), targeted on random incorrect classes, is reported. 10,000 benign images are randomly picked from the validation dataset for evaluation. (See the ASR sketch after the table.)
Hardware Specification | Yes | Our experiments are performed on an NVIDIA Tesla V100 GPU.
Software Dependencies | No | The paper mentions 'PyTorch' and 'Foolbox' but does not provide specific version numbers for these software components.
Experiment Setup | Yes | We set β = 3 for both datasets. The initial learning rate is set to 5e-2 and gradually decayed to 1e-6 using a cosine annealing schedule. On CIFAR-10, CAG is trained for a total of 500 epochs with a batch size of 256. On ImageNet, we train CAG for 20 epochs with a batch size of 64. The L2 norm of adversarial perturbations is set to 0.1 for both datasets. (See the schedule sketch after the table.)
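
The report only names Algorithm 1; the algorithm itself is not reproduced here. Purely as an illustration of the general shape a generator-based targeted attack takes (hypothetical, not the paper's Algorithm 1), one training step might look like the following PyTorch sketch, reusing the reported L2 budget of 0.1:

```python
# Hypothetical illustration only -- NOT the paper's Algorithm 1.
# A generic training step for a generator that produces targeted
# adversarial perturbations against a frozen victim model.
import torch
import torch.nn.functional as F

def generator_attack_step(generator, victim, optimizer, x, targets, eps=0.1):
    optimizer.zero_grad()
    delta = generator(x)                      # proposed perturbation
    # Project each perturbation onto the L2 ball of radius eps
    # (0.1 in the paper's reported setup).
    flat = delta.flatten(start_dim=1)
    norms = flat.norm(p=2, dim=1, keepdim=True).clamp(min=1e-12)
    delta = (flat * (eps / norms).clamp(max=1.0)).view_as(delta)
    adv = (x + delta).clamp(0.0, 1.0)         # keep pixels in valid range
    # Targeted objective: push the victim toward the assigned target class.
    loss = F.cross_entropy(victim(adv), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```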
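Both cited datasets are standard and publicly available. A minimal torchvision loading sketch is below; the transforms and directory paths are assumptions, not the authors' preprocessing:

```python
import torchvision
import torchvision.transforms as T

# CIFAR-10 downloads automatically; ImageNet must already be on disk.
cifar10 = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=T.ToTensor())
imagenet_val = torchvision.datasets.ImageNet(
    root="./imagenet", split="val",
    transform=T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()]))
```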
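For a targeted attack, the attack success rate (ASR) reported over the validation images is the fraction of adversarial images the victim model classifies as the assigned incorrect target class. A minimal sketch, where the function name and batching interface are assumptions:

```python
import torch

@torch.no_grad()
def targeted_asr(victim, adv_batches):
    """adv_batches yields (adversarial_images, target_labels) pairs."""
    hits, total = 0, 0
    for adv, targets in adv_batches:
        preds = victim(adv).argmax(dim=1)
        hits += (preds == targets).sum().item()
        total += targets.numel()
    return hits / total
```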
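The quoted schedule (initial learning rate 5e-2 cosine-annealed to 1e-6) maps directly onto PyTorch's CosineAnnealingLR. A minimal sketch follows; the optimizer choice (Adam) and the stand-in generator module are assumptions the quoted text does not settle:

```python
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))  # stand-in for CAG
optimizer = torch.optim.Adam(generator.parameters(), lr=5e-2)

epochs = 500  # CIFAR-10 setting; the ImageNet run uses 20 epochs
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=epochs, eta_min=1e-6)

for epoch in range(epochs):
    # ... one epoch over the training set (batch size 256 on CIFAR-10,
    # 64 on ImageNet), e.g. via generator_attack_step above ...
    scheduler.step()
```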