Rapid Plug-in Defenders

Authors: Kai Wu, Yujian Li, Jian Lou, Xiaoyu Zhang, Handing Wang, Jing Liu

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our evaluation centers on assessing CeTaD's effectiveness, transferability, and the impact of different components in scenarios involving one-shot adversarial examples.
Researcher Affiliation | Academia | Kai Wu (Xidian University, kwu@xidian.edu.cn); Yujian Betterest Li (Xidian University, bebetterest@outlook.com); Jian Lou (Zhejiang University, jian.lou@hoiying.net); Xiaoyu Zhang (Xidian University, xiaoyuzhang@xidian.edu.cn); Handing Wang (Xidian University, hdwang@xidian.edu.cn); Jing Liu (Xidian University, neouma@mail.xidian.edu.cn)
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] Justification: See Section 4 and Appendix Section A. Moreover, the code of CeTaD is easy to implement.
Open Datasets | Yes | Four image classification datasets, MNIST [21], CIFAR-10 [20], CIFAR-100 [20], and ImageNet-1k [34], are utilized.
Dataset Splits | No | "For simplicity, the training set only consists of adversarial examples whose number equals that of the classes, namely one-shot." The paper does not explicitly mention a distinct validation split. (A code sketch of this one-shot set follows the table.)
Hardware Specification | Yes | CPU: 14 vCPU Intel(R) Xeon(R) Gold 6330 @ 2.00GHz; GPU: 1x NVIDIA RTX 3090 (24 GB).
Software Dependencies | No | Optimization loops are implemented in PyTorch. The Lion optimizer [6], which the authors apply, is available at https://github.com/lucidrains/lion-pytorch. No specific version numbers for PyTorch or lion-pytorch are provided.
Experiment Setup | Yes | In the default setup, only layer-norm parameters (48 parameter groups, 36,864 variables in total) are fine-tuned using Lion [6] with default hyper-parameters, optimizing Eq. (2) over 500 epochs with a batch size of 32. (See the fine-tuning sketch after the table.)
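
The one-shot split noted under "Dataset Splits" can be made concrete with a short sketch. This is not the paper's code: generate_adversarial is a placeholder for whatever attack produces the adversarial examples, and the helper below simply keeps the first example encountered per class.

```python
# Hedged sketch: build a one-shot training set with one adversarial
# example per class, as described in the paper. `generate_adversarial`
# is a hypothetical attack function, not taken from the paper's code.
import torch
from torch.utils.data import DataLoader, TensorDataset

def one_shot_adversarial_set(dataset, num_classes, generate_adversarial):
    xs, ys = [], []
    seen = set()
    for x, y in dataset:
        if y not in seen:                    # keep the first example per class
            xs.append(generate_adversarial(x, y))
            ys.append(y)
            seen.add(y)
            if len(seen) == num_classes:     # stop once every class is covered
                break
    return TensorDataset(torch.stack(xs), torch.tensor(ys))

# e.g., for CIFAR-10 this yields 10 adversarial examples in total:
# loader = DataLoader(one_shot_adversarial_set(cifar10, 10, attack_fn), batch_size=32)
```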
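The "Software Dependencies" and "Experiment Setup" rows together imply a training loop along the following lines. This is a minimal sketch under assumptions, not CeTaD's released code: build_defended_model, one_shot_loader, and the cross-entropy loss are placeholders. The only details taken from the paper are layer-norm-only fine-tuning, the Lion optimizer with default hyper-parameters, 500 epochs, and batch size 32.

```python
# Minimal sketch: freeze everything except LayerNorm parameters and
# fine-tune them with Lion. Model, loader, and loss are placeholders.
import torch.nn as nn
from lion_pytorch import Lion  # pip install lion-pytorch

def layer_norm_params(model: nn.Module):
    """Collect only the parameters belonging to nn.LayerNorm modules."""
    params = []
    for module in model.modules():
        if isinstance(module, nn.LayerNorm):
            params.extend(module.parameters())
    return params

model = build_defended_model()        # hypothetical: plug-in defender + frozen classifier
for p in model.parameters():
    p.requires_grad_(False)           # freeze all weights first
ln_params = layer_norm_params(model)
for p in ln_params:
    p.requires_grad_(True)            # unfreeze layer-norm parameters only

optimizer = Lion(ln_params)           # Lion with its default hyper-parameters
criterion = nn.CrossEntropyLoss()     # stand-in for the paper's Eq. (2) objective

for epoch in range(500):              # 500 epochs, as reported
    for x_adv, y in one_shot_loader:  # batch size 32 over the one-shot set
        optimizer.zero_grad()
        loss = criterion(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

With only 36,864 trainable variables, a single RTX 3090 (the paper's reported GPU) is plausibly sufficient for this loop.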