Exploring Non-target Knowledge for Improving Ensemble Universal Adversarial Attacks

Authors: Juanjuan Weng, Zhiming Luo, Zhun Zhong, Dazhen Lin, Shaozi Li

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results validate that considering the non-target KL loss achieves superior transferability compared with the original KL loss by a large margin, and that the min-max training provides a mutual benefit in adversarial ensemble attacks. The source code is available at: https://github.com/WJJLL/ND-MM.
Researcher Affiliation | Academia | Juanjuan Weng¹, Zhiming Luo¹,³*, Zhun Zhong², Dazhen Lin¹, Shaozi Li¹. ¹Department of Artificial Intelligence, Xiamen University, China; ²Department of Information Engineering and Computer Science, University of Trento, Italy; ³Fujian Key Laboratory of Big Data Application and Intellectualization for Tea Industry, Wuyi University, China.
Pseudocode | No | The paper describes the mathematical formulations and optimization steps but does not include a structured pseudocode or algorithm block.
Open Source Code | Yes | The source code is available at: https://github.com/WJJLL/ND-MM.
Open Datasets | Yes | In this section, we conduct experiments on the ImageNet dataset (Deng et al. 2009) to evaluate the effectiveness of the proposed method on the non-targeted and targeted attacks.
Dataset Splits | Yes | For training and testing the UAPs, we randomly select 50k images from the ImageNet training set for training, and evaluate the attacking performance on the ImageNet validation set (50k images).
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not specify version numbers for any software dependencies, libraries, or programming languages used in the experiments.
Experiment Setup | Yes | In the training phase, the hyperparameters are set as follows: the number of classifier models K = 3, the batch size N = 20, the number of training epochs T = 5, the learning rate of inner maximization α = 0.003. For other parameters, we follow the settings in (Zhang et al. 2020b), i.e., the perturbation magnitude ϵ = 10, and the initial learning rate of the Adam optimizer (outer minimization) β = 0.005.
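
The dataset-splits row pins down the data protocol: a random 50k-image subset of the ImageNet training set is used to fit the UAP, and the full 50k-image ImageNet validation set is used for evaluation. A minimal sketch of that split, assuming a standard torchvision ImageFolder layout (the directory paths, preprocessing, and random seed are placeholders, not taken from the paper):

```python
# Hedged sketch of the data protocol from the dataset-splits row: a random
# 50k-image subset of the ImageNet training set for learning the UAP and the
# full 50k-image validation set for evaluation. Paths and preprocessing are
# placeholders, not details from the paper.
import torch
from torch.utils.data import Subset, DataLoader
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

train_full = datasets.ImageFolder("/path/to/imagenet/train", transform=preprocess)
val_set = datasets.ImageFolder("/path/to/imagenet/val", transform=preprocess)

# Randomly pick 50k training images (the seed is fixed here only for repeatability).
g = torch.Generator().manual_seed(0)
idx = torch.randperm(len(train_full), generator=g)[:50_000].tolist()
train_subset = Subset(train_full, idx)

train_loader = DataLoader(train_subset, batch_size=20, shuffle=True, num_workers=4)
val_loader = DataLoader(val_set, batch_size=20, shuffle=False, num_workers=4)
```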
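
The experiment-setup row likewise lists every hyperparameter needed to wire up the optimization loop: K = 3 classifiers, batch size N = 20, T = 5 epochs, inner-maximization learning rate α = 0.003, perturbation magnitude ϵ = 10 (on the 0-255 pixel scale), and Adam with β = 0.005 for the outer minimization. The sketch below is only an assumed, generic min-max ensemble UAP loop using those values; the choice of ensemble members, the per-model loss (a plain negative cross-entropy surrogate rather than the paper's non-target KL loss), and the softmax parameterization of the ensemble weights are all assumptions, so the authors' repository (https://github.com/WJJLL/ND-MM) remains the reference implementation.

```python
# Hedged sketch (not the authors' implementation): a generic min-max ensemble
# UAP loop wired up with the hyperparameters reported in the paper
# (K = 3 models, batch size N = 20, T = 5 epochs, inner lr alpha = 0.003,
# epsilon = 10/255, outer Adam lr beta = 0.005). The per-model loss is a
# placeholder non-targeted surrogate, NOT the paper's non-target KL loss.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumed ensemble of K = 3 ImageNet classifiers (the exact models are a guess).
ensemble = [
    models.vgg16(weights="IMAGENET1K_V1"),
    models.resnet50(weights="IMAGENET1K_V1"),
    models.densenet121(weights="IMAGENET1K_V1"),
]
for m in ensemble:
    m.eval().to(device)
    for p in m.parameters():
        p.requires_grad_(False)

eps = 10.0 / 255.0          # perturbation magnitude (epsilon = 10 on the 0-255 scale)
alpha, beta = 0.003, 0.005  # inner-max and outer-min learning rates
T, N = 5, 20                # training epochs and batch size

delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)    # the UAP
w_logits = torch.zeros(len(ensemble), device=device, requires_grad=True)  # ensemble weights (softmax-parameterized)
opt = torch.optim.Adam([delta], lr=beta)                                   # outer minimization

def per_model_loss(model, x, x_adv):
    """Placeholder non-targeted loss: push the adversarial prediction away from
    the clean prediction. The paper instead uses a non-target KL loss."""
    with torch.no_grad():
        clean_label = model(x).argmax(dim=1)
    return -F.cross_entropy(model(x_adv), clean_label)

# Stand-in data: replace with the 50k-image ImageNet training subset.
# (ImageNet mean/std normalization is omitted for brevity.)
loader = [torch.rand(N, 3, 224, 224) for _ in range(10)]

for epoch in range(T):
    for x in loader:
        x = x.to(device)
        x_adv = torch.clamp(x + delta, 0.0, 1.0)
        losses = torch.stack([per_model_loss(m, x, x_adv) for m in ensemble])
        w = torch.softmax(w_logits, dim=0)

        # Inner maximization: one ascent step on the ensemble weights,
        # shifting weight toward the model that is currently hardest to fool.
        inner = (w * losses.detach()).sum()
        g_w, = torch.autograd.grad(inner, w_logits)
        with torch.no_grad():
            w_logits += alpha * g_w

        # Outer minimization: Adam step on the universal perturbation,
        # followed by projection back into the L-infinity ball of radius eps.
        outer = (torch.softmax(w_logits, dim=0).detach() * losses).sum()
        opt.zero_grad()
        outer.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)
```

Parameterizing the ensemble weights through a softmax keeps them on the probability simplex without an explicit projection step; the paper's actual min-max formulation may handle this constraint differently.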