Generate (Non-Software) Bugs to Fool Classifiers
Authors: Hiromu Yakura, Youhei Akimoto, Jun Sakuma | Pages: 1070-1078
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 Experimental Results: To test our approaches, we first performed a preliminary experiment with a road sign classifier to investigate the feasibility of the proposed methods. Then, we conducted an experiment with an ImageNet classifier to confirm the availability of the methods against a wide range of input images and target classes. In both experiments, we compared the patch-based and PEPG-based methods. |
| Researcher Affiliation | Academia | Hiromu Yakura (1,2), Youhei Akimoto (1,2), Jun Sakuma (1,2); 1: University of Tsukuba, Japan; 2: RIKEN Center for Advanced Intelligence Project, Japan; hiromu@mdl.cs.tsukuba.ac.jp, {akimoto, jun}@cs.tsukuba.ac.jp |
| Pseudocode | Yes | Algorithm 1 PEPG-based adversarial example generation |
| Open Source Code | Yes | The source code for both experiments is available at https://github.com/hiromu/adversarial_examples_with_bugs. |
| Open Datasets | Yes | A road sign classifier trained on the German Traffic Sign Recognition Benchmark (Stallkamp et al. 2012); an Inception V3 classifier (Szegedy et al. 2016) pretrained on ImageNet; an image dataset of moths from Costa Rica (Rodner et al. 2015); the VB100 Bird Dataset (Ge et al. 2016) for the reference audio; the Speech Commands Dataset (Warden 2018) |
| Dataset Splits | No | The paper mentions using pre-trained models and datasets, but does not provide specific details on training, validation, or test splits beyond general dataset names or the source of the pre-trained models. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, or memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using WGAN-GP, Inception V3, TensorFlow, and WaveGAN, but it does not specify version numbers for these software dependencies, which are required for reproducibility. |
| Experiment Setup | No | The paper mentions some general settings like perturbation sizes (32×32, 64×64, 128×128 pixels) and iteration limits for generation, and lists hyperparameters (batch size m, loss importance α, PEPG step size β, initial distribution values μ_init, σ_init) in Algorithm 1. However, it does not provide concrete numerical values for these hyperparameters or detailed training configurations (e.g., learning rates, optimizer types, number of epochs for the GAN or target models) in the main text. |
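The PEPG-based generation method referenced in the table (Algorithm 1) builds on the Parameter-Exploring Policy Gradients estimator, which searches in parameter space by adapting the mean and standard deviation of a Gaussian sampling distribution from reward evaluations alone (no gradients of the target model). The sketch below is a minimal, self-contained illustration on a toy objective, assuming symmetric sampling and a within-batch reward baseline; the function name, hyperparameter values, and objective are illustrative, not taken from the paper or its code.

```python
import numpy as np

def pepg_maximize(f, dim, iters=400, pop=20, beta=0.1,
                  mu_init=0.0, sigma_init=1.0, seed=0):
    """Minimal PEPG with symmetric sampling. Hyperparameter names echo
    Algorithm 1's (batch size, step size beta, mu_init, sigma_init),
    but all values here are illustrative defaults."""
    rng = np.random.default_rng(seed)
    mu = np.full(dim, mu_init, dtype=float)
    sigma = np.full(dim, sigma_init, dtype=float)
    for _ in range(iters):
        eps = rng.normal(0.0, sigma, size=(pop, dim))   # perturbations
        r_pos = np.array([f(mu + e) for e in eps])      # reward at mu + eps
        r_neg = np.array([f(mu - e) for e in eps])      # reward at mu - eps
        r_diff = (r_pos - r_neg) / 2.0                  # drives the mean update
        r_avg = (r_pos + r_neg) / 2.0                   # drives the std update
        mu = mu + beta * (eps.T @ r_diff) / pop
        s_grad = (eps ** 2 - sigma ** 2) / sigma        # d log N / d sigma
        sigma = sigma + beta * (s_grad.T @ (r_avg - r_avg.mean())) / pop
        sigma = np.clip(sigma, 1e-3, None)              # keep exploration alive
    return mu

# Toy check: the search distribution's mean should approach the optimum at 3.
best = pepg_maximize(lambda th: -np.sum((th - 3.0) ** 2), dim=2)
```

In the paper's setting, the sampled parameters would instead parameterize the GAN-generated perturbation and `f` would combine the classifier's loss with the naturalness term weighted by α; the update rules above are the generic PEPG core that such a loop reuses unchanged.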