Towards Robust Detection of Adversarial Examples
Authors: Tianyu Pang, Chao Du, Yinpeng Dong, Jun Zhu
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply our method to defend various attacking methods on the widely used MNIST and CIFAR-10 datasets, and achieve significant improvements on robust predictions under all the threat models in the adversarial setting. |
| Researcher Affiliation | Academia | Tianyu Pang, Chao Du, Yinpeng Dong, Jun Zhu Dept. of Comp. Sci. & Tech., State Key Lab for Intell. Tech. & Systems BNRist Center, THBI Lab, Tsinghua University, Beijing, China {pty17, du-c14, dyp17}@mails.tsinghua.edu.cn, dcszj@mail.tsinghua.edu.cn |
| Pseudocode | No | The paper describes methods in text and uses mathematical formulas, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a concrete link to source code or explicitly state that source code is available for the methodology described. |
| Open Datasets | Yes | We use the two widely studied datasets MNIST [20] and CIFAR-10 [17]. MNIST is a collection of handwritten digits with a training set of 60,000 images and a test set of 10,000 images. CIFAR-10 consists of 60,000 color images in 10 classes with 6,000 images per class. There are 50,000 training images and 10,000 test images. |
| Dataset Splits | No | The paper specifies training and test set sizes for MNIST and CIFAR-10, but it does not explicitly mention a separate validation set split or how data was partitioned for validation. |
| Hardware Specification | No | The paper mentions funding from "NVIDIA NVAIL Program, and the projects from Siemens and Intel" in the acknowledgements, but it does not specify the exact hardware (e.g., specific GPU/CPU models, memory details) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names like PyTorch or TensorFlow with their respective versions) needed to replicate the experiments. |
| Experiment Setup | Yes | For each network, we use both the CE and RCE as the training objectives, trained with the same settings as He et al. [16]. The number of training steps for both objectives is set to be 20,000 on MNIST and 90,000 on CIFAR-10. The pixel values of images in both datasets are scaled to be in the interval [-0.5, 0.5]. |
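The experiment-setup row mentions two concrete ingredients: scaling pixel values into [-0.5, 0.5], and training with the reverse cross-entropy (RCE) objective proposed in the paper. The sketch below, in PyTorch (an assumption; the paper does not name its framework), illustrates both. The RCE formulation here, cross-entropy against a "reverse" label distribution that is zero on the true class and uniform over the remaining classes, follows the paper's description, but the function names and shapes are ours, not the authors' code.

```python
import torch
import torch.nn.functional as F

def scale_pixels(x):
    """Shift pixels from [0, 1] into [-0.5, 0.5], as stated in the setup."""
    return x - 0.5

def reverse_cross_entropy(logits, labels, num_classes=10):
    """Sketch of the RCE objective (assumed formulation):
    cross-entropy between the model's predictive distribution and a
    reverse label vector that puts 0 on the true class and
    1/(L-1) on each of the other L-1 classes."""
    reverse_targets = torch.full(
        (labels.size(0), num_classes), 1.0 / (num_classes - 1)
    )
    # Zero out the true-class entry of each reverse target.
    reverse_targets.scatter_(1, labels.unsqueeze(1), 0.0)
    log_probs = F.log_softmax(logits, dim=1)
    return -(reverse_targets * log_probs).sum(dim=1).mean()
```

A reproduction would plug `reverse_cross_entropy` in wherever `F.cross_entropy` is used for the CE baseline, keeping the optimizer and step counts (20,000 on MNIST, 90,000 on CIFAR-10) identical between the two objectives, as the paper reports.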