CD-UAP: Class Discriminative Universal Adversarial Perturbation

Authors: Chaoning Zhang, Philipp Benz, Tooba Imtiaz, In-So Kweon

AAAI 2020, pp. 6754–6761

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The proposed approach has been evaluated with extensive experiments on various benchmark datasets. Additionally, our proposed approach achieves state-of-the-art performance for the original task of UAP attacking all classes, which demonstrates the effectiveness of our approach.
Researcher Affiliation | Academia | Chaoning Zhang,* Philipp Benz,* Tooba Imtiaz, In-So Kweon; Korea Advanced Institute of Science and Technology (KAIST), South Korea
Pseudocode | Yes | Algorithm 1: Class Discriminative Universal Perturbation Generation
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | The proposed approach has been evaluated with extensive experiments on various benchmark datasets: CIFAR10, CIFAR100 (Krizhevsky, Hinton, and others 2009), and ImageNet (Deng et al. 2009).
Dataset Splits | No | In all our experiments, we train the CD-UAP on the training dataset. Specifically, we only use the initially correctly classified samples in the training dataset. The generated CD-UAP is evaluated on the test dataset. There is no explicit mention of a 'validation' split with specific details.
Hardware Specification | No | The paper mentions leveraging "the power of parallel computing devices, such as GPUs" but does not specify particular GPU models, CPU models, or detailed hardware specifications used for the experiments.
Software Dependencies | No | All experiments are conducted with the PyTorch framework. We empirically found that the widely used ADAM (Kingma and Ba 2014; Reddy Mopuri, Krishna Uppala, and Venkatesh Babu 2018) optimizer converges faster than standard SGD. No specific version numbers are provided for PyTorch or any other software/libraries.
Experiment Setup | Yes | For the CIFAR and ImageNet datasets, we deploy the l∞-norm on δ with ϵ = 10 and ϵ = 15, respectively, for natural images in the range of [0, 255]. As discussed earlier, we use the ADAM optimizer for all experiments, setting the batch size to 128 for CIFAR10 and CIFAR100 (Krizhevsky, Hinton, and others 2009) experiments, and 32 for experiments on ImageNet (Deng et al. 2009). A minimal sketch combining these settings with Algorithm 1 follows below.
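
The rows above name Algorithm 1 (Class Discriminative Universal Perturbation Generation), the ADAM optimizer, and the l∞ budget ϵ, but no released code. The following is a minimal PyTorch-style sketch, assuming a simple split objective (cross-entropy raised on the targeted classes and kept low on the remaining classes) and placeholder names such as generate_cd_uap; it illustrates the reported setup rather than reproducing the authors' exact loss terms or implementation.

```python
import torch
import torch.nn.functional as F


def generate_cd_uap(model, loader, targeted_classes, eps=10 / 255,
                    lr=0.01, epochs=5, device="cuda"):
    """Sketch of class-discriminative universal perturbation generation.

    Assumptions (not from released code): one shared perturbation delta,
    ADAM updates on delta only, and an l-infinity projection to eps.
    The paper states eps on the [0, 255] scale (10 for CIFAR, 15 for
    ImageNet); here eps is given on the [0, 1] scale.
    """
    model = model.to(device).eval()
    for p in model.parameters():          # only delta is optimized
        p.requires_grad_(False)
    targeted = torch.tensor(sorted(targeted_classes), device=device)

    # One universal perturbation shared by every image, initialised to zero.
    images, _ = next(iter(loader))
    delta = torch.zeros_like(images[0], device=device, requires_grad=True)

    # The paper reports ADAM converging faster than plain SGD here.
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            logits = model(images + delta)

            # Split the batch into targeted and non-targeted samples.
            is_targeted = (labels.unsqueeze(1) == targeted).any(dim=1)

            loss = torch.zeros((), device=device)
            if is_targeted.any():
                # Raise the loss on targeted classes so they are misclassified.
                loss = loss - F.cross_entropy(logits[is_targeted],
                                              labels[is_targeted])
            if (~is_targeted).any():
                # Keep non-targeted classes correctly classified.
                loss = loss + F.cross_entropy(logits[~is_targeted],
                                              labels[~is_targeted])

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            # Project delta back onto the l-infinity ball of radius eps.
            with torch.no_grad():
                delta.clamp_(-eps, eps)

    return delta.detach()
```

With a CIFAR-10 training loader of batch size 128 (32 for ImageNet, per the setup row), this would return a single perturbation tensor that can be added to any test image. The relative weighting of the two class groups, the clipping of perturbed images to the valid pixel range, and the restriction to initially correctly classified training samples (see the Dataset Splits row) are further details a faithful reproduction would need to take from the paper.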