Beyond Pretrained Features: Noisy Image Modeling Provides Adversarial Defense

Authors: Zunzhi You, Daochang Liu, Bohyung Han, Chang Xu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that, in terms of adversarial robustness, NIM is superior to MIM thanks to its effective denoising capability. In this section, we empirically show the effectiveness of NIM as a self-supervised pretraining method that can bring adversarial robustness via the De3 method.
Researcher Affiliation | Academia | Zunzhi You (1), Daochang Liu (1), Bohyung Han (2), Chang Xu (1). (1) School of Computer Science, University of Sydney; (2) ECE & IPAI, Seoul National University.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Source code and models are available at https://github.com/youzunzhi/NIM-AdvDef.
Open Datasets | Yes | We conduct all the experiments on the ImageNet-1K [11] dataset. [11] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
Dataset Splits | Yes | We use the training set (1.28 million images) in pretraining and finetuning and use the validation set (50,000 images) for evaluation.
Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments (e.g., GPU/CPU models, memory, or cloud instances).
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python version, or library versions such as PyTorch).
Experiment Setup | Yes | For PGD, we set the number of steps n = 10 and step size α = 2/255. Our default models are pretrained with σ ∼ Γ(25, 3) and finetuned on denoised images of σ ∼ U(0, 30). All models are pretrained for 800 epochs and then finetuned for 100 epochs.
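The PGD hyperparameters quoted above (n = 10 steps, step size α = 2/255) can be sketched as a standard L∞-bounded PGD loop. This is a minimal NumPy illustration, not the paper's implementation: the function name `pgd_attack`, the ε budget of 8/255, and the abstract `grad_fn` (a stand-in for the loss gradient with respect to the input) are assumptions for the sketch.

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=8/255, alpha=2/255, n_steps=10, rng=None):
    """L-infinity PGD sketch with the paper's quoted hyperparameters.

    x       : clean input with pixel values in [0, 1]
    grad_fn : callable returning the loss gradient w.r.t. the input
              (hypothetical stand-in for a model's backward pass)
    eps     : perturbation budget (8/255 is a common ImageNet choice,
              assumed here; the excerpt does not state it)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Random start inside the eps-ball, then clip to the valid pixel range.
    x_adv = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
    for _ in range(n_steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)        # gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep pixels valid
    return x_adv
```

The noise levels σ ∼ Γ(25, 3) (pretraining) and σ ∼ U(0, 30) (finetuning) could likewise be drawn with `rng.gamma(25.0, 3.0)` and `rng.uniform(0.0, 30.0)`, assuming a shape–scale parameterization of the Gamma distribution; the excerpt does not specify which convention the paper uses.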