Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Rapid Plug-in Defenders
Authors: Kai Wu, Yujian Li, Jian Lou, Xiaoyu Zhang, Handing Wang, Jing Liu
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our evaluation centers on assessing CeTaD's effectiveness, transferability, and the impact of different components in scenarios involving one-shot adversarial examples. |
| Researcher Affiliation | Academia | Kai Wu (Xidian University, EMAIL); Yujian Betterest Li (Xidian University, EMAIL); Jian Lou (Zhejiang University, EMAIL); Xiaoyu Zhang (Xidian University, EMAIL); Handing Wang (Xidian University, EMAIL); Jing Liu (Xidian University, EMAIL) |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] Justification: See Section 4 and Appendix Section A. Moreover, the code of CeTaD is easy to implement. |
| Open Datasets | Yes | Four image classification datasets, MNIST [21], CIFAR-10 [20], CIFAR-100 [20], and ImageNet-1k [34], are utilized. |
| Dataset Splits | No | For simplicity, the training set consists only of adversarial examples whose number equals that of the classes, namely one-shot. The paper does not explicitly mention a distinct validation set split. |
| Hardware Specification | Yes | CPU: 14 vCPU Intel(R) Xeon(R) Gold 6330 @ 2.00GHz; GPU: 1x NVIDIA RTX 3090 (24GB). |
| Software Dependencies | No | Optimization loops are implemented in PyTorch. The implementation of Lion ([6]), the optimizer which we apply, is available at https://github.com/lucidrains/lion-pytorch. (No specific version numbers for PyTorch or Lion are provided.) |
| Experiment Setup | Yes | In our default setup, only layer norm parameters (48 parameter groups, 36864 variables in total) are fine-tuned using Lion [6] with default hyper-parameters. We optimize Eq. (2) over 500 epochs with a batch size of 32. |
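For context on the setup above: the Lion optimizer used by the paper updates parameters with only the sign of an interpolated momentum term, plus decoupled weight decay. The scalar sketch below illustrates the published Lion update rule; it is not the paper's code, and `lion_step` with its `lr`/`wd` defaults is a hypothetical helper (only `beta1=0.9` and `beta2=0.99` are Lion's published defaults).

```python
def lion_step(theta, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update on a scalar parameter.

    theta: parameter value, grad: its gradient, m: momentum buffer.
    Returns (new_theta, new_m).
    """
    # Interpolate momentum and gradient; only the sign drives the update.
    c = beta1 * m + (1.0 - beta1) * grad
    sign_c = (c > 0) - (c < 0)  # sign(c) in {-1, 0, 1}
    # Apply the signed update with decoupled weight decay.
    new_theta = theta - lr * (sign_c + wd * theta)
    # Momentum buffer uses a separate interpolation factor beta2.
    new_m = beta2 * m + (1.0 - beta2) * grad
    return new_theta, new_m
```

In practice one would use the `lion-pytorch` package linked above and pass it only the layer-norm parameter groups, keeping all other weights frozen.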