Practical No-box Adversarial Attacks against DNNs
Authors: Qizhang Li, Yiwen Guo, Hao Chen
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that adversarial examples crafted on prototypical auto-encoding models transfer well to a variety of image classification and face verification models. |
| Researcher Affiliation | Collaboration | Qizhang Li (ByteDance AI Lab, liqizhang@bytedance.com); Yiwen Guo (ByteDance AI Lab, guoyiwen.ai@bytedance.com); Hao Chen (University of California, Davis, chen@ucdavis.edu) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is publicly available at: https://github.com/qizhangli/nobox-attacks. |
| Open Datasets | Yes | For image classification, we crafted adversarial examples based on benign ImageNet images [38]... For face verification, we first attacked open-source models on the LFW dataset [18]... FaceNet [40] ... and CosFace [52] ... both trained on the CASIA-WebFace dataset [55]. |
| Dataset Splits | No | The paper uses the ImageNet validation set as source data for crafting adversarial examples and implies some validation for early stopping during substitute model training ("Training could stop early if a performance plateau was reached on each tiny training set"), but it does not give explicit train/validation/test splits (e.g., percentages or counts) for its own model training. A sketch of such a plateau-based stopping rule appears after the table. |
| Hardware Specification | Yes | All our experiments were performed on one NVIDIA Tesla-V100 GPU using PyTorch [36] implementations. |
| Software Dependencies | No | The paper mentions using PyTorch but does not specify a version number or other software dependencies with version numbers. |
| Experiment Setup | Yes | Models were all trained for at most 15,000 iterations using Adam [24] with a fixed learning rate of 0.001. ... λ is simply set to 1 for all experiments. We let the optimization step-size of I-FGSM be 1/255 for both ImageNet and LFW, following prior work. (See the I-FGSM sketch after the table.) |
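
The plateau criterion quoted in the Dataset Splits row is not spelled out in the paper. Below is a minimal PyTorch sketch of what such a substitute-training loop could look like, using the reported settings (Adam, fixed learning rate of 0.001, at most 15,000 iterations); the `patience` and `min_delta` thresholds and the `loss_fn`/`data_iter` interfaces are our own assumptions, not the authors' code.

```python
import torch

def train_substitute(model, loss_fn, data_iter, max_iters=15_000,
                     patience=500, min_delta=1e-4):
    # Adam with a fixed lr of 0.001 and at most 15,000 iterations,
    # as reported in the paper. The plateau test (patience/min_delta)
    # is a hypothetical criterion; the paper does not state one.
    opt = torch.optim.Adam(model.parameters(), lr=0.001)
    best_loss, stale = float("inf"), 0
    for _ in range(max_iters):
        x, y = next(data_iter)                 # one batch from a tiny training set
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        if loss.item() < best_loss - min_delta:
            best_loss, stale = loss.item(), 0  # still improving
        else:
            stale += 1
        if stale >= patience:                  # performance plateau reached
            break
    return model
```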
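
For the attack step itself, the paper reports an I-FGSM optimization step size of 1/255. Here is a minimal L∞ I-FGSM sketch in PyTorch; the perturbation budget `eps`, the iteration count, and the cross-entropy loss are illustrative assumptions, and the released code at https://github.com/qizhangli/nobox-attacks remains the authoritative implementation.

```python
import torch
import torch.nn.functional as F

def i_fgsm(model, x, y, eps=8/255, step=1/255, n_iter=20):
    # Iterative FGSM under an L-infinity budget.
    # step=1/255 matches the paper; eps, n_iter, and the
    # cross-entropy loss are illustrative assumptions.
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # stay a valid image
    return x_adv.detach()
```

In the paper's no-box setting the `model` above would be a substitute (e.g., a prototypical auto-encoding model) rather than the victim network, and the resulting `x_adv` is then transferred to the target classifier or face-verification model.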