Query-efficient Meta Attack to Deep Neural Networks
Authors: Jiawei Du, Hu Zhang, Joey Tianyi Zhou, Yi Yang, Jiashi Feng
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on MNIST, CIFAR10 and tiny-Imagenet demonstrate that our meta-attack method can remarkably reduce the number of model queries without sacrificing the attack performance. |
| Researcher Affiliation | Academia | (1) Dept. of ECE, National University of Singapore, Singapore; (2) ReLER, University of Technology Sydney, Australia; (3) Institute of High Performance Computing, A*STAR, Singapore |
| Pseudocode | Yes | Algorithm 1 Meta Attacker Training |
| Open Source Code | Yes | The code of our work is available at https://github.com/dydjw9/MetaAttack_ICLR2020/. |
| Open Datasets | Yes | We evaluate the attack performance on MNIST (LeCun, 1998) for handwritten digit recognition, CIFAR10 (Krizhevsky & Hinton, 2009) and tiny-Imagenet (Russakovsky et al., 2015) for object classification. |
| Dataset Splits | No | The paper states, 'We use 10000 randomly selected images from the training set to train the meta-attackers in three datasets. The proportion of the selected images to the whole training set are 16%, 20%, and 10% respectively.' and 'we randomly select 1000 images from each dataset as test images.' However, it does not explicitly describe a validation set or its split. (The random-selection step is sketched after this table.) |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments, such as specific GPU or CPU models. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9'). |
| Experiment Setup | Yes | Meta-training Details: For all the experiments, we use the same architecture for the meta attacker A as shown in Table 6. We use Reptile (Nichol et al., 2018) with 0.01 learning rate to train meta attackers. [...] Fine-tuning parameters are set as m = 5 for MNIST and CIFAR10, and m = 3 for tiny-Imagenet. Top q = 128 coordinates are selected as part coordinates for attacker fine-tuning and model attacking on MNIST; and q = 500 on CIFAR10 and tiny-Imagenet. (The Reptile update and top-q coordinate selection are sketched below.) |
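
The Dataset Splits row describes randomly selecting 10000 training images (for meta-attacker training) and 1000 test images per dataset. Below is a minimal PyTorch sketch of such a selection; the helper name `random_subset` and the fixed seeds are our assumptions, not details from the paper:

```python
import torch

def random_subset(dataset_len: int, k: int, seed: int = 0):
    """Draw k indices uniformly without replacement (hypothetical helper;
    the paper only says the images are 'randomly selected')."""
    g = torch.Generator().manual_seed(seed)
    return torch.randperm(dataset_len, generator=g)[:k]

# e.g. 10000 of MNIST's 60000 training images (~16%, matching the quoted proportion)
train_idx = random_subset(60000, 10000)
# 1000 images drawn as attack-evaluation ("test") images
test_idx = random_subset(10000, 1000, seed=1)
```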
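
The Experiment Setup row states that the meta attacker is trained with Reptile (Nichol et al., 2018) at a 0.01 learning rate, and the paper's Algorithm 1 covers meta-attacker training. Below is a hedged sketch of a single Reptile outer update in PyTorch; the loss choice, inner optimizer, inner-step count, and argument names are placeholders rather than the authors' implementation:

```python
import copy
import torch

def reptile_step(attacker, task_batches, meta_lr=0.01, inner_lr=1e-3, inner_steps=5):
    """One Reptile outer update: adapt a clone of the meta attacker on a
    sampled task, then move the meta-parameters toward the adapted weights.
    `attacker` maps images to gradient maps (MSE regression is our assumption)."""
    adapted = copy.deepcopy(attacker)
    opt = torch.optim.Adam(adapted.parameters(), lr=inner_lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(inner_steps):
        images, grad_maps = next(task_batches)  # one task's (input, target-gradient) pairs
        opt.zero_grad()
        loss_fn(adapted(images), grad_maps).backward()
        opt.step()
    # Reptile update: theta <- theta + meta_lr * (theta_adapted - theta)
    with torch.no_grad():
        for p, q in zip(attacker.parameters(), adapted.parameters()):
            p.add_(meta_lr * (q - p))
```

Reptile needs no second-order gradients, which keeps meta-training cheap compared with MAML-style updates.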
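
The same row reports selecting the top q coordinates (q = 128 on MNIST; q = 500 on CIFAR10 and tiny-Imagenet) for attacker fine-tuning and model attacking. A sketch of picking the q largest-magnitude coordinates of a predicted gradient map with `torch.topk`; the function name is ours:

```python
import torch

def top_q_coordinates(pred_grad: torch.Tensor, q: int = 128) -> torch.Tensor:
    """Flat indices of the q coordinates with the largest |gradient| estimate;
    only these coordinates would then be queried and fine-tuned on."""
    flat = pred_grad.flatten().abs()
    _, idx = torch.topk(flat, k=min(q, flat.numel()))
    return idx

# Example: an MNIST-shaped predicted gradient map with q = 128, as in the paper.
coords = top_q_coordinates(torch.randn(1, 28, 28), q=128)
```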