GAMA: Generative Adversarial Multi-Object Scene Attacks

Authors: Abhishek Aich, Calvin-Khang Ta, Akash Gupta, Chengyu Song, Srikanth Krishnamurthy, Salman Asif, Amit Roy-Chowdhury

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | GAMA triggers 16% more misclassification than state-of-the-art generative approaches in black-box settings where both the classifier architecture and data distribution of the attacker are different from the victim. Our code is available here: https://abhishekaich27.github.io/gama.html. Our extensive experiments on various black-box settings (where victims are multi-label/single-label classifiers and object detectors) show GAMA's state-of-the-art transferability of perturbations (Tables 2, 3, 5, 4, 6, and 7).
Researcher Affiliation | Collaboration | Abhishek Aich, Calvin-Khang Ta, Akash Gupta, Chengyu Song, Srikanth V. Krishnamurthy, M. Salman Asif, Amit K. Roy-Chowdhury; University of California, Riverside, CA, USA. AG is currently with Vimaan AI, USA.
Pseudocode | Yes | Algorithm 1: GAMA pseudo-code
Open Source Code | Yes | Our code is available here: https://abhishekaich27.github.io/gama.html. The code has been released here: https://github.com/abhishekaich27/GAMA-pytorch
Open Datasets | Yes | We use the multi-label datasets PASCAL-VOC [78] and MS-COCO [79] to train generators for the baselines and our method. (A dataset-loading sketch is given after the table.)
Dataset Splits | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] We provide these details in the supplementary material. (Additionally, the paper mentions evaluation on the 50K validation set of ImageNet, which implies a specific evaluation split.)
Hardware Specification | Yes | Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] We provide these details in the supplementary material.
Software Dependencies | Yes | For the CLIP model, we use the ViT-B/16 framework [36]. Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] We provide these details in the supplementary material. (This typically includes software environment details.)
Experiment Setup | Yes | Unless otherwise stated, the perturbation budget is set to ℓ∞ ≤ 10 for all experiments. We chose the following surrogate models f(·) (PASCAL-VOC or MS-COCO pre-trained multi-label classifiers): ResNet152 (Res152) [80], DenseNet169 (Den169) [81], and VGG19 [64]. For the CLIP model, we use the ViT-B/16 framework [36]. See supplementary material for more training details. (A model-setup sketch is given after the table.)
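
To make the Open Datasets row concrete, here is a minimal loading sketch. It assumes standard torchvision dataset wrappers, a fixed input resolution, and placeholder local paths; none of these specifics come from the paper, which describes its data pipeline in the supplementary material.

```python
# Minimal sketch (assumed tooling and paths): loading the two multi-label datasets
# named above with standard torchvision wrappers.
import torchvision.transforms as T
from torchvision.datasets import VOCDetection, CocoDetection

transform = T.Compose([
    T.Resize((224, 224)),  # input resolution is an assumption, not taken from the paper
    T.ToTensor(),
])

# PASCAL-VOC: the year, split, and root directory are placeholders.
voc_train = VOCDetection(root="data/voc", year="2012", image_set="train",
                         download=True, transform=transform)

# MS-COCO: the image directory and annotation file are placeholders.
coco_train = CocoDetection(root="data/coco/train2017",
                           annFile="data/coco/annotations/instances_train2017.json",
                           transform=transform)
```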
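
The Experiment Setup row can likewise be sketched in code. The snippet below instantiates the named surrogate classifiers and CLIP ViT-B/16, and projects a perturbed image onto the stated ℓ∞ budget. The use of torchvision ImageNet weights (the paper instead uses PASCAL-VOC / MS-COCO pre-trained multi-label classifiers), the OpenAI `clip` package, and reading the budget of 10 as 10/255 for inputs in [0, 1] are all assumptions, not details confirmed by the paper.

```python
# Minimal sketch (assumed tooling): surrogate models and CLIP ViT-B/16 from the
# experiment-setup quote, plus an l-infinity projection for the perturbation budget.
import torch
import torchvision.models as models
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"

# Surrogate models f(.); ImageNet weights are a stand-in for the multi-label
# classifiers pre-trained on PASCAL-VOC / MS-COCO described in the paper.
surrogates = {
    "Res152": models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1),
    "Den169": models.densenet169(weights=models.DenseNet169_Weights.IMAGENET1K_V1),
    "VGG19": models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1),
}
for m in surrogates.values():
    m.to(device).eval()

# CLIP model with the ViT-B/16 backbone, as stated in the quote.
clip_model, clip_preprocess = clip.load("ViT-B/16", device=device)

def project_linf(x_adv: torch.Tensor, x_clean: torch.Tensor, eps: float = 10 / 255) -> torch.Tensor:
    """Project an adversarial image back into the l-infinity ball of radius eps around x_clean."""
    delta = torch.clamp(x_adv - x_clean, -eps, eps)
    return torch.clamp(x_clean + delta, 0.0, 1.0)
```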