Towards Lightweight Black-Box Attack Against Deep Neural Networks
Authors: Chenghao Sun, Yonggang Zhang, Wan Chaoqun, Qizhou Wang, Ya Li, Tongliang Liu, Bo Han, Xinmei Tian
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct extensive experiments to verify the power of lightweight black-box attacks augmenting with ETF. |
| Researcher Affiliation | Collaboration | Chenghao Sun (1), Yonggang Zhang (2), Wan Chaoqun (3), Qizhou Wang (2), Ya Li (4), Tongliang Liu (5), Bo Han (2), Xinmei Tian (1); (1) University of Science and Technology of China, (2) Hong Kong Baptist University, (3) Alibaba Cloud Computing Ltd, (4) iFlytek Research, (5) The University of Sydney |
| Pseudocode | No | The paper describes its methods in prose and mathematical formulations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/sunch-ustc/Error_TransFormer/tree/ETF |
| Open Datasets | Yes | 7 models trained on the ImageNet dataset [47] |
| Dataset Splits | Yes | the lightweight surrogate models adopt only 1,000 images randomly sampled from the validation set of ImageNet |
| Hardware Specification | No | The paper does not provide specific details on the hardware used, such as GPU or CPU models, memory, or specific cloud instance types. It only implicitly refers to running computations without specifying the hardware. |
| Software Dependencies | No | The paper mentions applying 'classic methods, e.g., PGD [40], MI [11], DI [56], and TI [12]' but does not specify software dependencies with version numbers (e.g., Python version, library versions like PyTorch, TensorFlow, scikit-learn). |
| Experiment Setup | Yes | The batch size is 128, the epoch is 500, and the initial learning rate is 0.4, linearly decreasing to 0.008. |
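The reported setup (batch size 128, 500 epochs, learning rate decreasing linearly from 0.4 to 0.008) implies a simple linear schedule. A minimal sketch of such a schedule is below; the function name and the choice of interpolating over `total_epochs - 1` steps are assumptions for illustration, not taken from the paper.

```python
def linear_lr(epoch, total_epochs=500, lr_start=0.4, lr_end=0.008):
    """Linearly interpolate the learning rate from lr_start at epoch 0
    down to lr_end at the final epoch. Hyperparameter values follow the
    paper's reported setup; the schedule shape is an assumption."""
    frac = epoch / (total_epochs - 1)
    return lr_start + (lr_end - lr_start) * frac

# First and last epochs of the reported 500-epoch schedule:
print(linear_lr(0))    # initial rate, 0.4
print(linear_lr(499))  # final rate, approximately 0.008
```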