Optimal transport based adversarial patch to leverage large scale attack transferability
Authors: Pol Labarbarie, Adrien Chan-Hon-Tong, Stéphane Herbin, Milad Leyli-Abadi
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through digital experiments conducted on ImageNet-1K, we provide evidence that our new patches are the only ones that can simultaneously influence multiple Transformer models and Convolutional Neural Networks. Physical world experiments demonstrate that our patch can affect systems in deployment without explicit knowledge. This section evaluates our APA through digital, hybrid and physical world experiments. |
| Researcher Affiliation | Collaboration | Pol Labarbarie1,2, Adrien Chan-Hon-Tong2, Stéphane Herbin2, Milad Leyli-Abadi1; 1IRT SystemX, 2ONERA/DTIS, University Paris-Saclay, Palaiseau, France; {firstname.name}@irt-systemx.fr, {firstname.name}@onera.fr |
| Pseudocode | No | The paper describes the methodology using text and mathematical equations but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not explicitly state that the source code for the described methodology is publicly available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | ImageNet-1K (Deng et al., 2009) |
| Dataset Splits | Yes | We split the ImageNet-1K validation set into a training set of 40000 images on which we train patches and a test set of 10000 images on which we evaluate their impact. (A hypothetical reconstruction of this split is sketched after the table.) |
| Hardware Specification | Yes | For the training of each patch on medium and large models we consider a single NVIDIA V100-32G or a single NVIDIA A100, respectively. Each run is launched on the same setup, composed of a single NVIDIA A100. |
| Software Dependencies | No | The paper mentions the 'PyTorch library (Paszke et al., 2019)' but does not provide specific version numbers for PyTorch or other key software dependencies. |
| Experiment Setup | Yes | The patch optimization is performed using 100 epochs (1 epoch equals 1000 iterations) with a batch size of 50 images and for three different learning rates (0.1, 0.5, 1). We choose for our method p = 2 and K = 500 (reasons are explained in Appendix I). (A minimal training-loop sketch of this schedule follows the table.) |
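
The dataset split quoted above is described only at a high level. The PyTorch sketch below shows one way to obtain a 40000 / 10000 split of the ImageNet-1K validation set; the dataset path, random seed, and preprocessing transforms are assumptions, since the paper does not specify them.

```python
# Hypothetical reconstruction of the 40000 / 10000 split of the ImageNet-1K
# validation set; the dataset path, seed and transforms are not given in the paper.
import torch
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# ImageNet-1K validation split (50,000 images); the root path is an assumption.
val_set = datasets.ImageNet(root="/data/imagenet", split="val", transform=preprocess)

# 40000 images to train patches, 10000 to evaluate their impact.
generator = torch.Generator().manual_seed(0)   # arbitrary seed, not from the paper
patch_train_set, patch_test_set = torch.utils.data.random_split(
    val_set, [40_000, 10_000], generator=generator
)

train_loader = torch.utils.data.DataLoader(patch_train_set, batch_size=50, shuffle=True)
test_loader = torch.utils.data.DataLoader(patch_test_set, batch_size=50)
```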
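
The reported optimization schedule (100 epochs of 1000 iterations, batch size 50, learning rate in {0.1, 0.5, 1}) can likewise be illustrated with a minimal patch-training loop. Everything beyond the schedule itself is an assumption: the surrogate model (ResNet-50 here), the patch size and placement, and the plain negated cross-entropy objective, which merely stands in for the paper's optimal-transport feature loss.

```python
# Minimal sketch of the quoted schedule: 100 epochs x 1000 iterations,
# batch size 50, one of the learning rates {0.1, 0.5, 1}. The surrogate model,
# patch size/placement, and the objective are placeholders, not the paper's method.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

patch = torch.rand(3, 50, 50, device=device, requires_grad=True)  # size is an assumption

def apply_patch(images, patch):
    # Paste the patch in a fixed corner; the paper uses its own placement scheme.
    patched = images.clone()
    patched[:, :, : patch.shape[1], : patch.shape[2]] = patch
    return patched

def endless(loader):
    # Cycle over the loader so that "1 epoch = 1000 iterations" regardless of its length.
    while True:
        for batch in loader:
            yield batch

learning_rate = 0.5                      # one of the three rates tried (0.1, 0.5, 1)
optimizer = torch.optim.SGD([patch], lr=learning_rate)
epochs, iters_per_epoch = 100, 1000      # 1 epoch equals 1000 iterations

batches = endless(train_loader)          # train_loader from the split sketch above
for epoch in range(epochs):
    for _ in range(iters_per_epoch):
        images, labels = next(batches)
        images, labels = images.to(device), labels.to(device)
        logits = model(apply_patch(images, patch))
        loss = -F.cross_entropy(logits, labels)  # placeholder untargeted objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        patch.data.clamp_(0, 1)          # keep the patch a valid image
```

The negated cross-entropy turns SGD into gradient ascent on the classification loss of the surrogate model; the authors' actual objective, optimizer, and patch transformations may differ.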