Adversarial Attack Generation Empowered by Min-Max Optimization
Authors: Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we first evaluate the proposed min-max optimization strategy on three attack tasks. We show that our approach leads to substantial improvement compared with state-of-the-art attack methods such as average ensemble PGD [34] and EOT [3, 10, 5]. We also demonstrate the effectiveness of learnable domain weights in guiding the adversary's exploration over multiple domains. [Section 4.1, Experimental setup] We thoroughly evaluate our algorithm on MNIST and CIFAR-10. |
| Researcher Affiliation | Collaboration | University of Toronto; Vector Institute; Cleveland State University; Michigan State University; MIT-IBM Watson AI Lab, IBM Research; University of California, Irvine; Syracuse University; University of Illinois at Urbana-Champaign |
| Pseudocode | Yes | Algorithm 1 APGDA to solve problem (4) and Algorithm 2 AMPGD to solve problem (13) |
| Open Source Code | Yes | Code is available at https://github.com/wangjksjtu/minmax-adv. |
| Open Datasets | Yes | We thoroughly evaluate our algorithm on MNIST and CIFAR-10. |
| Dataset Splits | No | The paper mentions MNIST and CIFAR-10 datasets and their use in training, but does not provide specific details on validation dataset splits (percentages or counts). For example, Appendix D.1 'Details on Architectures and Training' mentions training and test data but no explicit validation set details. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as exact GPU or CPU models. It only vaguely mentions 'GPUs' in the acknowledgements: 'Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute.' |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9, and CUDA 11.1'). |
| Experiment Setup | Yes | The adversarial examples are generated by 20-step PGD/APGDA unless otherwise stated (e.g., 50 steps for ensemble attacks). The APGDA algorithm is relatively robust and is not strongly affected by the choices of hyperparameters (α, β, γ). The choices of α, β, γ for all experiments and more results on CIFAR-10 are provided in Appendix D.2 and Appendix E. |
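
The rows above reference the paper's APGDA procedure (Algorithm 1) for the min-max attack with learnable domain weights. As a rough illustration of that idea, the sketch below alternates a projected gradient ascent step on the perturbation (the inner maximization over a weighted ensemble loss) with a projected descent step on the simplex-constrained weights, which shifts weight toward the models the attack currently fools least. This is a minimal numpy sketch under our own assumptions (quadratic toy losses, fixed step sizes named `alpha`/`beta` by analogy with the paper's α, β), not the authors' implementation; see their repository at https://github.com/wangjksjtu/minmax-adv for the real code.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def min_max_attack(losses, grads, x0, eps=0.3, alpha=0.02, beta=0.05, steps=20):
    """Alternating update: ascent on the perturbation delta (inner max),
    descent on domain weights w (outer min), each followed by a projection
    (delta onto the L_inf ball of radius eps, w onto the simplex)."""
    K = len(losses)
    w = np.ones(K) / K                      # uniform initial domain weights
    delta = np.zeros_like(x0)
    for _ in range(steps):
        # ascent on delta: signed gradient of the w-weighted ensemble loss
        g = sum(w[i] * grads[i](x0 + delta) for i in range(K))
        delta = np.clip(delta + alpha * np.sign(g), -eps, eps)
        # descent on w: gradient w.r.t. w is just the per-model loss vector,
        # so low-loss (hard-to-fool) models gain weight after projection
        ell = np.array([losses[i](x0 + delta) for i in range(K)])
        w = project_simplex(w - beta * ell)
    return delta, w
```

With 20 steps this mirrors the report's "20-step PGD/APGDA" setting; the weighted-gradient step with fixed uniform `w` would instead correspond to the average ensemble PGD baseline the paper compares against.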