Transferable Adversarial Attacks on SAM and Its Downstream Models

Authors: Song Xia, Wenhan Yang, Yi Yu, Xun Lin, Henghui Ding, Lingyu Duan, Xudong Jiang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the effectiveness of the proposed universal meta-initialized and gradient robust adversarial attack (UMI-GRAT) toward SAMs and their downstream models.
Researcher Affiliation | Academia | 1 Nanyang Technological University, 2 Pengcheng Laboratory, 3 Beihang University, 4 Fudan University, 5 Peking University
Pseudocode | Yes | Algorithm 1: Generating adversarial examples by UMI-GRAT
Open Source Code | Yes | Code is available at https://github.com/xiasong0501/GRAT.
Open Datasets | Yes | The datasets include: the Synapse multi-organ segmentation dataset [29], which contains 3,779 abdominal CT scans with 13 types of organs annotated; the ISTD dataset [49], which contains 1,870 image triplets of shadow images; the COD10K dataset [14], which contains 5,066 camouflaged object images; the CHAMELEON dataset, which contains 76 camouflaged images; and the CAMO dataset [30], which contains 1,500 camouflaged object images. The natural image dataset D consists of a total of 20,000 images, with 10,000 from ImageNet and 10,000 from the SA-1B dataset.
Dataset Splits | No | The paper lists various datasets used for evaluation but does not provide explicit details about how these datasets were split into training, validation, and test sets for the experiments conducted in the paper.
Hardware Specification | Yes | We run our experiments for attacking the medical segmentation model using one RTX 4090 GPU with 24 GB memory. We run the rest of the experiments using one RTX A6000 GPU with 48 GB memory.
Software Dependencies | No | The paper mentions methods like MI-FGSM and PGN, but it does not specify software dependencies such as libraries or programming languages with their corresponding version numbers (e.g., Python 3.x, PyTorch 1.x, TensorFlow 2.x).
Experiment Setup | Yes | For all methods reported, we set the attack update iterations T_a as 10, with the ℓ∞ bound ϵ = 10 and the step size α = 2. For our UMI, we set the meta iterations T_m = 7 and the universal step size η = 1. For PGN and BSR, we set the number of examples as 8 for efficiency. (See the illustrative attack-loop sketch below.)
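
To make the hyperparameters in the Experiment Setup row concrete, here is a minimal sketch of a generic iterative ℓ∞-bounded attack loop that uses those values (T_a = 10 iterations, ϵ = 10 and α = 2 on the 0-255 scale, i.e. 10/255 and 2/255 for images in [0, 1]). This is not the authors' UMI-GRAT implementation: the surrogate model, the feature-distortion objective, and the `universal_init` argument standing in for the paper's UMI initialization are all assumptions for illustration; the released code at https://github.com/xiasong0501/GRAT contains the actual method.

```python
import torch

def iterative_linf_attack(images, surrogate_model, loss_fn,
                          eps=10 / 255, alpha=2 / 255, num_iters=10,
                          universal_init=None):
    """Generic iterative l_inf-bounded attack sketch (not the official UMI-GRAT code).

    images:          clean inputs in [0, 1], shape (B, C, H, W)
    surrogate_model: surrogate feature extractor, e.g. a SAM image encoder
    loss_fn:         objective to maximize, e.g. distortion of adversarial vs. clean features
    universal_init:  optional universal perturbation (the paper's UMI would supply this)
    """
    # Start from the optional universal initialization, projected into the eps-ball.
    delta = torch.zeros_like(images) if universal_init is None else universal_init.clone()
    delta = delta.clamp(-eps, eps)

    # Reference features of the clean inputs, computed once without gradients.
    with torch.no_grad():
        clean_feat = surrogate_model(images)

    for _ in range(num_iters):
        delta.requires_grad_(True)
        adv = (images + delta).clamp(0, 1)
        loss = loss_fn(surrogate_model(adv), clean_feat)
        grad = torch.autograd.grad(loss, delta)[0]

        # Signed-gradient ascent step, then project back into the l_inf ball
        # and keep the adversarial image within the valid pixel range.
        with torch.no_grad():
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
            delta = (images + delta).clamp(0, 1) - images

    return (images + delta).clamp(0, 1).detach()
```

A simple transfer objective for `loss_fn` could be `lambda adv_feat, clean_feat: torch.nn.functional.mse_loss(adv_feat, clean_feat)`, i.e. maximizing feature distortion on the surrogate encoder. In the paper, the universal meta-initialization (UMI) would provide `universal_init` and the gradient robust loss would replace this plain objective; neither component is reproduced in this sketch.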