Reverse Engineering of Imperceptible Adversarial Image Perturbations

Authors: Yifan Gong, Yuguang Yao, Yize Li, Yimeng Zhang, Xiaoming Liu, Xue Lin, Sijia Liu

ICLR 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness of CDD-RED under different evaluation metrics (ranging from pixel-level and prediction-level to attribution-level alignment) and a variety of attack generation methods (e.g., FGSM, PGD, CW, AutoAttack, and adaptive attacks); see the attack-generation sketch below. |
| Researcher Affiliation | Collaboration | Yifan Gong (1), Yuguang Yao (2), Yize Li (1), Yimeng Zhang (2), Xiaoming Liu (2), Xue Lin (1), Sijia Liu (2,3). (1) Northeastern University; (2) Michigan State University; (3) MIT-IBM Watson AI Lab, IBM Research. |
| Pseudocode | No | The paper describes its algorithms through formulations (e.g., training objectives and equations) but does not provide any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at: https://github.com/Northeastern-NLP/RED |
| Open Datasets | Yes | To train and test RED models, we generate adversarial examples on the ImageNet dataset (Deng et al., 2009). |
| Dataset Splits | Yes | As a consequence, 45K (3 × 5 × 3K) adversarial images are generated in total: 37.5K for training and 7.5K for validation. |
| Hardware Specification | Yes | We measure the computation cost on a single RTX Titan GPU. |
| Software Dependencies | No | The paper mentions specific models (VGG19, DnCNN) and optimizers (Adam) with citations, but it does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | The RED problem is solved using an Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 10⁻⁴, decayed by a factor of 10 every 140 training epochs. In (4), the regularization parameter λ is set to 0.025. The maximum number of training epochs is set to 300; see the training-setup sketch below. |
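
The Research Type row lists the attack methods used to build the RED training data. As a rough illustration only, the sketch below generates such adversarial examples with the torchattacks library; the library choice, the victim model, and all attack budgets (eps, alpha, steps, c) are assumptions not stated in the excerpt above.

```python
# Hedged sketch of adversarial-example generation for RED training data.
# The paper uses FGSM, PGD, CW, and AutoAttack; torchattacks and these
# hyperparameters are our assumptions, not the authors' exact pipeline.
import torch
import torchvision.models as models
import torchattacks

# One plausible victim model; the paper mentions VGG19 among its models.
model = models.vgg19(weights="IMAGENET1K_V1").eval()

# Hypothetical attack configurations; the exact budgets are not quoted above.
attacks = {
    "FGSM": torchattacks.FGSM(model, eps=8 / 255),
    "PGD": torchattacks.PGD(model, eps=8 / 255, alpha=2 / 255, steps=10),
    "CW": torchattacks.CW(model, c=1, steps=100),
}

def make_adversarial(images: torch.Tensor, labels: torch.Tensor) -> dict:
    """Return adversarial versions of a clean batch, one per attack."""
    return {name: atk(images, labels) for name, atk in attacks.items()}

# Usage: a tiny random batch in [0, 1], as torchattacks expects.
adv = make_adversarial(torch.rand(2, 3, 224, 224), torch.tensor([0, 1]))
```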
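The Experiment Setup row fully specifies the optimization schedule. Below is a minimal PyTorch sketch of that setup (Adam, initial learning rate 1e-4, 0.1× step decay every 140 epochs, 300 epochs, λ = 0.025); the denoiser, the loss (the paper's Eq. (4) is not quoted here), and the data loader are hypothetical placeholders.

```python
# Minimal sketch of the reported training setup; not the authors' code.
import torch
import torch.nn.functional as F

# Placeholder denoiser; the paper uses a DnCNN architecture (not shown here).
denoiser = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
# "Decays by a factor of 10 every 140 epochs" read as a 0.1x step decay.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=140, gamma=0.1)
lam = 0.025  # regularization parameter lambda from Eq. (4)

def red_loss(x_denoised, x_clean, lam):
    # Hypothetical stand-in for the paper's objective: pixel reconstruction
    # plus a lambda-weighted regularizer (the true Eq. (4) is not quoted above).
    return F.mse_loss(x_denoised, x_clean) + lam * x_denoised.abs().mean()

# Dummy loader standing in for the 37.5K adversarial/clean training pairs.
train_loader = [(torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64))]

for epoch in range(300):  # maximum number of training epochs
    for x_adv, x_clean in train_loader:
        optimizer.zero_grad()
        loss = red_loss(denoiser(x_adv), x_clean, lam)
        loss.backward()
        optimizer.step()
    scheduler.step()  # lr: 1e-4 -> 1e-5 at epoch 140 -> 1e-6 at epoch 280
```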