Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains
Authors: Qilong Zhang, Xiaodan Li, Yuefeng Chen, Jingkuan Song, Lianli Gao, Yuan He, Hui Xue
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on coarse-grained and fine-grained domains demonstrate the effectiveness of our proposed methods. |
| Researcher Affiliation | Collaboration | 1. University of Electronic Science and Technology of China, China (qilong.zhang@std.uestc.edu.cn, jingkuan.song@gmail.com, lianli.gao@uestc.edu.cn); 2. Alibaba Group, China ({fiona.lxd, yuefeng.chenyf, heyuan.hy, hui.xueh}@alibaba-inc.com) |
| Pseudocode | No | The paper includes diagrams of the generator structure (e.g., Figure 7) but no explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor any structured code-like text. |
| Open Source Code | Yes | Our code is available at https://github.com/Alibaba-AAIG/Beyond-ImageNet-Attack. |
| Open Datasets | Yes | Our training data is the large-scale ImageNet (Russakovsky et al., 2015) training set, which includes about 1.2 million 224×224×3 images. |
| Dataset Splits | No | The paper specifies 'training set' for ImageNet and 'Test size' for all datasets in Table 1, but it does not explicitly describe training/validation/test splits or mention a separate validation set for their experiments. |
| Hardware Specification | No | The paper does not specify the exact hardware used for experiments, such as specific GPU or CPU models, or details about the computing environment. |
| Software Dependencies | No | The paper mentions software like the 'Torchvision library' and 'PyTorch pre-trained ImageNet model' but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | Our generator Gθ adopts the same architecture as (Naseer et al., 2019)... We use the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 2e-4, and the exponential decay rates for the first and second moments are set to 0.5 and 0.999, respectively. All generators are trained for one epoch with batch size 16. For the layer L, we attack the output of Maxpool.3 for VGG-16 and VGG-19, the output of Conv3_8 for Res-152, and the output of DenseBlock.2 for Dense-169... The maximum perturbation ε is set to 10. Following Lu et al. (2020), we set the step size α = 4 and the number of iterations T = 100 for all iterative methods. For DIM, we set the default decay factor µ = 1.0 and the transformation probability p = 0.7. |
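The quoted setup fixes an L∞ perturbation budget of ε = 10 (on the 0–255 pixel scale). A minimal, framework-free sketch of that constraint is shown below; the function name `project` and the per-pixel formulation are illustrative assumptions, not code from the authors' repository.

```python
# Hedged sketch of the L_inf constraint from the reported setup: each
# adversarial pixel must stay within [x - eps, x + eps] of the clean pixel
# and inside the valid [0, 255] range.

EPSILON = 10.0  # maximum L_inf perturbation, as stated in the paper's setup

def project(clean_pixel: float, adv_pixel: float, eps: float = EPSILON) -> float:
    """Project an adversarial pixel back into the eps-ball around the clean pixel."""
    lo = max(0.0, clean_pixel - eps)   # lower bound: eps-ball floor, clipped at 0
    hi = min(255.0, clean_pixel + eps) # upper bound: eps-ball ceiling, clipped at 255
    return min(max(adv_pixel, lo), hi)

# Example: a +25 perturbation on pixel value 100 is clipped back to +10.
print(project(100.0, 125.0))  # 110.0
```

In the actual attack this projection would be applied elementwise to the generator's output before evaluating transferability.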