FACL-Attack: Frequency-Aware Contrastive Learning for Transferable Adversarial Attacks

Authors: Hunmin Yang, Jongoh Jeong, Kuk-Jin Yoon

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments. Experimental Setup: Datasets and attack settings. We evaluate our method under challenging strict black-box settings (i.e., cross-domain and cross-model) in the image classification task. We set the target domain and victim model to be different from the source domain and surrogate model. The perturbation generator is trained on ImageNet-1K (Russakovsky et al. 2015) and evaluated on CUB-200-2011 (Wah et al. 2011), Stanford Cars (Krause et al. 2013), and FGVC Aircraft (Maji et al. 2013). (A hedged sketch of this evaluation protocol appears after the table.)
Researcher Affiliation | Academia | Hunmin Yang¹,²,*, Jongoh Jeong¹,*, Kuk-Jin Yoon¹ (¹Visual Intelligence Lab., KAIST; ²Agency for Defense Development); {hmyang, jeong2, kjyoon}@kaist.ac.kr
Pseudocode | No | The paper does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology.
Open Datasets | Yes | The perturbation generator is trained on ImageNet-1K (Russakovsky et al. 2015) and evaluated on CUB-200-2011 (Wah et al. 2011), Stanford Cars (Krause et al. 2013), and FGVC Aircraft (Maji et al. 2013).
Dataset Splits | Yes |
  Dataset | # Class | # Train / Val. | Resolution
  ImageNet-1K | 1,000 | 1.28 M / 50,000 | 224 × 224
  CUB-200-2011 | 200 | 5,994 / 5,794 | 448 × 448
  Stanford Cars | 196 | 8,144 / 8,041 | 448 × 448
  FGVC Aircraft | 100 | 6,667 / 3,333 | 448 × 448
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions using an 'Adam optimizer' but does not specify software versions for programming languages, libraries (e.g., PyTorch, TensorFlow), or other dependencies.
Experiment Setup | Yes | We train with an Adam optimizer (β₁ = 0.5, β₂ = 0.999) (Kingma and Ba 2015) with a learning rate of 2 × 10⁻⁴ and a batch size of 16 for 1 epoch. The perturbation budget for crafting the adversarial image is ℓ∞ ≤ 10. For the FADR hyper-parameters, we follow a prior work (Huang et al. 2021) to set the low- and high-frequency thresholds to f_l = 7 and f_h = 112, respectively. We use ρ = 0.01 and σ = 8 for the spectral transformation; more details are in the Supplementary. (A hedged configuration sketch appears after the table.)
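The strict black-box protocol quoted in the Research Type row (generator trained on ImageNet-1K with a surrogate model, then tested against a different victim model on a different domain) can be made concrete with a short sketch. This is a hypothetical reconstruction, not code from the paper: `generator`, `victim`, and `target_loader` are placeholders, and the fooling-rate metric is one common choice of measure, not necessarily the paper's exact one.

```python
import torch

@torch.no_grad()
def fooling_rate(generator, victim, target_loader, eps=10 / 255):
    """Cross-domain, cross-model evaluation: the generator never saw
    the victim model or the target dataset during training."""
    generator.eval()
    victim.eval()
    fooled = total = 0
    for x, _ in target_loader:                 # e.g., CUB-200-2011 images
        clean = victim(x).argmax(dim=1)        # victim's clean predictions
        delta = generator(x).clamp(-eps, eps)  # enforce the l_inf budget
        adv = victim((x + delta).clamp(0, 1)).argmax(dim=1)
        fooled += (adv != clean).sum().item()
        total += x.size(0)
    return fooled / total
```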
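To make the quoted hyper-parameters concrete, here is a minimal PyTorch sketch of the training configuration. The generator architecture is a stand-in (the paper's network is not reproduced here), and the 10/255 budget assumes pixel values normalized to [0, 1].

```python
import torch
import torch.nn as nn

EPS = 10 / 255  # l_inf budget of 10 on the 0-255 pixel scale

# Stand-in generator; the paper's actual architecture is not shown here.
generator = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1), nn.Tanh())
optimizer = torch.optim.Adam(generator.parameters(),
                             lr=2e-4, betas=(0.5, 0.999))

x = torch.rand(16, 3, 224, 224)         # batch size 16, ImageNet-sized input
delta = generator(x).clamp(-EPS, EPS)   # project onto the l_inf ball
x_adv = (x + delta).clamp(0.0, 1.0)
```

The FADR thresholds f_l = 7 and f_h = 112 split the DCT spectrum of a 224 × 224 image into low, mid, and high bands. The sketch below perturbs only the low and high bands, matching the idea of randomizing domain-variant frequencies while preserving the mid band. The band-membership rule (max(u, v)) and the way ρ and σ enter the noise are our assumptions, since the paper defers those details to its supplementary material.

```python
import numpy as np
from scipy.fft import dctn, idctn

FL, FH = 7, 112  # low/high frequency thresholds from the paper

def band_masks(size=224):
    """Boolean masks for the low, mid, and high DCT bands.
    Band membership via max(u, v) is an assumption."""
    u, v = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    radius = np.maximum(u, v)
    low = radius < FL
    high = radius >= FH
    mid = ~(low | high)
    return low, mid, high

def randomize_bands(img, sigma=8.0, rho=0.01, rng=None):
    """Hypothetical FADR-style spectral transformation: perturb the
    low- and high-frequency DCT coefficients of each channel while
    keeping the mid band intact. img is a (C, H, W) array in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    low, mid, high = band_masks(img.shape[-1])
    out = np.empty_like(img)
    for c in range(img.shape[0]):
        coef = dctn(img[c], norm="ortho")
        # How sigma and rho combine is our guess, not the paper's formula.
        noise = rng.normal(0.0, sigma * rho, coef.shape)
        coef = np.where(mid, coef, coef * (1.0 + noise))
        out[c] = idctn(coef, norm="ortho")
    return out.clip(0.0, 1.0)
```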