A New Defense Against Adversarial Images: Turning a Weakness into a Strength
Authors: Shengyuan Hu, Tao Yu, Chuan Guo, Wei-Lun Chao, Kilian Q. Weinberger
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test our detection mechanism against the white-box attack defined in section 4.3 in several different settings, and release our code publicly for reproducibility [4]. Datasets and target models. We conduct our empirical studies on ImageNet [11] and CIFAR-10 [26]. |
| Researcher Affiliation | Academia | Equal Contribution. Department of Computer Science, Cornell University. Department of Computer Science and Engineering, The Ohio State University. Email: {ty367, sh797, cg563, kqw4}@cornell.edu, chao.209@osu.edu. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We test our detection mechanism against the white-box attack defined in section 4.3 in several different settings, and release our code publicly for reproducibility [4]. ... [4] https://github.com/s-huu/TurningWeaknessIntoStrength |
| Open Datasets | Yes | We conduct our empirical studies on ImageNet [11] and CIFAR-10 [26]. |
| Dataset Splits | Yes | We sample 1,000 images from ImageNet (validation) and CIFAR-10 (test): each class has 1 (ImageNet, 1,000 classes) or 100 (CIFAR-10, 10 classes) images; see the sampling sketch after the table. |
| Hardware Specification | No | No specific hardware details like GPU/CPU models, processor types, or memory amounts are provided for the experimental setup. |
| Software Dependencies | No | The paper mentions 'PyTorch' and the optimizer 'Adam [25]' but does not specify version numbers, which are necessary for reproducible software dependencies. |
| Experiment Setup | Yes | We optimize the adversarial loss L for each of them using Adam [25] with learning rate 0.005 for a maximum of 400 steps to construct the adversarial images. The L∞-bound for all attacks is set to τ = 0.1. We set λ = 2 (cf. Equation 4) for ImageNet and λ = 3 for CIFAR-10. ... we fix [the step budget] to a reasonable value of 50 steps for ImageNet ... and 200 steps for CIFAR-10. ... train a VGG-19 model [43] with a dropout rate of 0.5 [46]. See the attack-loop sketch after the table. |
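To make the Dataset Splits row concrete, here is a minimal sketch of the per-class sampling for CIFAR-10, assuming torchvision, a `./data` root, and a fixed seed (none of which the paper specifies); ImageNet would be handled analogously with 1 image per class. This is an illustration, not the authors' released code.

```python
# Sketch: draw 100 test images per CIFAR-10 class (10 classes -> 1,000 total).
# The data root, seed, and use of torchvision are assumptions for illustration.
import random
from collections import defaultdict

import torchvision

# CIFAR-10 test split; no transform needed just to index labels.
cifar_test = torchvision.datasets.CIFAR10(root="./data", train=False, download=True)

# Group dataset indices by class label.
per_class = defaultdict(list)
for idx, (_, label) in enumerate(cifar_test):
    per_class[label].append(idx)

random.seed(0)  # assumed seed; the paper does not report one
sampled = [i for label in per_class for i in random.sample(per_class[label], 100)]
assert len(sampled) == 1_000
```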
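The Experiment Setup row describes an Adam-driven attack under an L∞ bound; the sketch below illustrates that loop in PyTorch with the reported hyperparameters (lr = 0.005, up to 400 steps, τ = 0.1). The loss here is a placeholder cross-entropy term, not the paper's full adversarial loss L (Equation 4 with its λ term is not reproduced), `attack` is a hypothetical helper name, and no early stopping is shown.

```python
# Sketch: L-infinity-bounded attack optimized with Adam, per the reported setup.
import torch
import torch.nn.functional as F

def attack(model, x, target, tau=0.1, lr=0.005, steps=400):
    """Optimize a perturbation delta so model(x + delta) predicts `target`,
    enforcing ||delta||_inf <= tau by clamping after every Adam step."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Placeholder objective; the paper's loss L (Equation 4) adds a
        # lambda-weighted term that is not reproduced here.
        adv_loss = F.cross_entropy(model(x + delta), target)
        opt.zero_grad()
        adv_loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-tau, tau)                    # L-infinity bound
            delta.copy_((x + delta).clamp(0, 1) - x)   # keep x + delta a valid image
    return (x + delta).detach()
```

Clamping the image to [0, 1] after the norm clamp can only shrink the perturbation, so both constraints hold at every step.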