Perceptual-Sensitive GAN for Generating Adversarial Patches

Authors: Aishan Liu, Xianglong Liu, Jiaxin Fan, Yuqing Ma, Anlan Zhang, Huiyuan Xie, Dacheng Tao

AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments under semi-whitebox and black-box settings on two large-scale datasets, GTSRB and ImageNet, demonstrate that the proposed PS-GAN outperforms state-of-the-art adversarial patch attack methods.
Researcher Affiliation | Academia | State Key Laboratory of Software Development Environment, Beihang University, China; Department of Computer Science and Technology, University of Cambridge, UK; UBTECH Sydney AI Centre, SIT, FEIT, University of Sydney, Australia. {liuaishan, xlliu, jxfan, mayuqing, zal1506}@buaa.edu.cn, hx255@cam.ac.uk, dacheng.tao@sydney.edu.au
Pseudocode | Yes | Algorithm 1: Perceptual-Sensitive Generative Adversarial Network (PS-GAN).
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | Extensive experiments are conducted on GTSRB (Houben et al. 2008) and ImageNet (Deng et al. 2009)... we choose Quick Draw (J. Jongejan and Fox-Gieg. 2016) as the corresponding patch dataset.
Dataset Splits | Yes | Each image and patch is normalized to [-1, 1] and scaled to 128x128x3 and 16x16x3, respectively. (A preprocessing sketch based on this description appears below the table.)
Hardware Specification | Yes | In our experiments, we use Tensorflow and Keras for the implementation and test them on an NVIDIA Tesla K80 GPU cluster.
Software Dependencies | No | The paper mentions 'Tensorflow and Keras' but does not specify their version numbers, which are required for a reproducible description of the software dependencies.
Experiment Setup | Yes | We train PS-GAN for 250 epochs with a batch size of 64 and a learning rate of 0.0002, decreased by 10% every 900 steps. As for the hyperparameters in the loss function, we set λ to range from 0.002 to 0.005, γ to 1.0, and δ to 0.0001, respectively. (A training-configuration sketch based on these values appears below the table.)
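The Dataset Splits row describes the paper's preprocessing only in prose. The following is a minimal sketch of that step, assuming TensorFlow image ops and 8-bit inputs in [0, 255]; the function name and input/output layout are illustrative, not taken from the paper.

```python
# A minimal preprocessing sketch, assuming TensorFlow and 8-bit inputs in [0, 255].
# The function name and argument layout are illustrative, not from the paper.
import tensorflow as tf

def preprocess(image, patch):
    """Resize and normalize one (scene image, seed patch) pair as described in the report."""
    image = tf.image.resize(image, (128, 128))  # scene image -> 128x128x3
    patch = tf.image.resize(patch, (16, 16))    # seed patch  -> 16x16x3
    # Normalize pixel values to [-1, 1].
    image = image / 127.5 - 1.0
    patch = patch / 127.5 - 1.0
    return image, patch
```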
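The Experiment Setup row lists the reported training hyperparameters. The sketch below assembles them into a Keras-style configuration; the Adam optimizer, the step-wise ExponentialDecay schedule, and all variable names are assumptions, and only the numeric values come from the quoted text. λ, γ, and δ weight loss terms that are defined in the paper, not reproduced here.

```python
# A minimal training-configuration sketch assembled from the reported values,
# assuming TensorFlow/Keras. Optimizer choice, decay schedule, and names are
# assumptions; only the numeric values come from the report.
import tensorflow as tf

EPOCHS     = 250
BATCH_SIZE = 64
LAMBDA     = 0.002    # reported range: 0.002 to 0.005
GAMMA      = 1.0
DELTA      = 0.0001   # loss-term weights; the loss itself is defined in the paper

# Learning rate 0.0002, decreased by 10% every 900 steps (step-wise decay).
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=2e-4,
    decay_steps=900,
    decay_rate=0.90,
    staircase=True,
)
generator_opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
discriminator_opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
```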