Interpreting Adversarially Trained Convolutional Neural Networks
Authors: Tianyuan Zhang, Zhanxing Zhu
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We design systematic approaches to interpret AT-CNNs in both qualitative and quantitative ways and compare them with normally trained models. |
| Researcher Affiliation | Academia | School of EECS, Peking University, China; School of Mathematical Sciences, Peking University, China; Center for Data Science, Peking University; Beijing Institute of Big Data Research. |
| Pseudocode | No | No pseudocode or algorithm blocks are present in the paper. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | Three image datasets are considered: Tiny ImageNet (https://tiny-imagenet.herokuapp.com/), Caltech-256 (Griffin et al., 2007), and CIFAR-10. |
| Dataset Splits | Yes | Tiny ImageNet has 200 classes of objects. Each class has 500 training images, 50 validation images, and 50 test images. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., specific library versions or programming language versions). |
| Experiment Setup | Yes | When training on CIFAR-10, we use the ResNet-18 model (He et al., 2016a;b); for data augmentation, we perform zero padding with width 4, horizontal flip, and random crop. For both Tiny ImageNet and Caltech-256, we use the ResNet-18 model as the network architecture. We use PGD-based adversarial training with bounded l∞ and l2 norm constraints. We also investigate FGSM (Goodfellow et al., 2014) based adversarial training. |
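Since the paper releases no code, reproducing the setup requires reimplementing the PGD inner loop used for adversarial training. The sketch below is a minimal NumPy illustration of a PGD attack under either an l∞ or l2 constraint; the function name, hyperparameter defaults, and the toy gradient callback are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def pgd_attack(x, y, grad_fn, eps=8 / 255, alpha=2 / 255, steps=10, norm="linf"):
    """Illustrative PGD attack sketch (not the authors' code).

    x       : clean input in [0, 1]
    y       : target/label passed through to grad_fn
    grad_fn : callable (x_adv, y) -> gradient of the loss w.r.t. x_adv
    eps     : radius of the l_inf or l2 constraint ball
    alpha   : per-step size; steps: number of PGD iterations
    """
    # Random start inside the l_inf ball (common PGD initialization).
    x_adv = x + np.random.uniform(-eps, eps, x.shape)
    x_adv = np.clip(x_adv, 0.0, 1.0)
    for _ in range(steps):
        g = grad_fn(x_adv, y)
        if norm == "linf":
            # Sign-gradient ascent step, then project back onto the l_inf ball.
            x_adv = x_adv + alpha * np.sign(g)
            x_adv = np.clip(x_adv, x - eps, x + eps)
        else:
            # Normalized-gradient step, then project onto the l2 ball.
            x_adv = x_adv + alpha * g / (np.linalg.norm(g) + 1e-12)
            delta = x_adv - x
            d_norm = np.linalg.norm(delta)
            if d_norm > eps:
                x_adv = x + delta * (eps / d_norm)
        # Keep the adversarial example a valid image.
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

In adversarial training, each minibatch would be replaced by `pgd_attack` outputs before the usual gradient update; a practical reimplementation would compute `grad_fn` via autodiff in a deep learning framework rather than analytically.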