DataFreeShield: Defending Adversarial Attacks without Training Data
Authors: Hyeyoon Lee, Kanghyun Choi, Dain Kwon, Sunjong Park, Mayoore Selvarasa Jaiswal, Noseong Park, Jonghyun Choi, Jinho Lee
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive validation, we show that DataFreeShield outperforms baselines, demonstrating that the proposed method sets the first entirely data-free solution for the adversarial robustness problem. |
| Researcher Affiliation | Collaboration | 1) Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea; 2) NVIDIA (work done while at IBM); 3) School of Computing, KAIST, Daejeon, South Korea. |
| Pseudocode | Yes | Appendix B: Overall Procedure of DataFreeShield; Algorithm 1: Procedure of DataFreeShield. |
| Open Source Code | Yes | The code used for the experiments is included in a zip archive in the supplementary material, along with scripts for reproduction. The code is under the NVIDIA Source Code License-NC and the GNU General Public License v3.0. |
| Open Datasets | Yes | We use a total of four datasets: MedMNIST-v2 as medical datasets (Yang et al., 2023), SVHN (Netzer et al., 2011), CIFAR-10, and CIFAR-100 (Krizhevsky et al., 2009). (A data-loading sketch follows the table.) |
| Dataset Splits | No | The paper mentions using specific datasets for training and testing, and also refers to common practices in adversarial training (e.g., 'PGD-10'). However, it does not explicitly state the specific percentages or counts used for train, validation, and test splits within these datasets to enable direct reproduction of the data partitioning. |
| Hardware Specification | Yes | All experiments have been conducted using PyTorch 1.9.1 and Python 3.8.0 running on Ubuntu 20.04.3 LTS with CUDA version 11.1 using RTX3090 and A6000 GPUs. |
| Software Dependencies | Yes | All experiments have been conducted using PyTorch 1.9.1 and Python 3.8.0 running on Ubuntu 20.04.3 LTS with CUDA version 11.1 using RTX3090 and A6000 GPUs. (An environment-check sketch follows the table.) |
| Experiment Setup | Yes | For adversarial training, we used the SGD optimizer with learning rate 1e-4, momentum 0.9, and a batch size of 200 for 100 epochs (200 epochs for ResNet-20 and ResNet-18). All adversarial perturbations were created using PGD-10 (Madry et al., 2018) with the specified ϵ-bounds. For L_DFShield, we simply use λ1 = 1 and λ2 = 1... For GradRefine, we use B = {10, 20} for all settings... We use τ = 0.5 for all our experiments with GradRefine. (A PGD-10 training sketch follows the table.) |
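
The four datasets named in the Open Datasets row are all publicly downloadable. Below is a minimal loading sketch, assuming torchvision for SVHN/CIFAR and the separate `medmnist` package for MedMNIST-v2; the data root, the bare `ToTensor` transform, and the PathMNIST subset choice are illustrative assumptions, not the authors' released code.

```python
# Hypothetical data-loading sketch for the four benchmark datasets;
# roots, transforms, and the MedMNIST subset are assumptions.
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()  # the paper does not quote its preprocessing pipeline

cifar10 = torchvision.datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
cifar100 = torchvision.datasets.CIFAR100(root="./data", train=True, download=True, transform=transform)
svhn = torchvision.datasets.SVHN(root="./data", split="train", download=True, transform=transform)

# MedMNIST-v2 ships as its own package (Yang et al., 2023);
# PathMNIST is one of several subsets and is used here only as an example.
from medmnist import PathMNIST
pathmnist = PathMNIST(split="train", download=True, transform=transform)
```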
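
The Hardware Specification and Software Dependencies rows pin the stack to PyTorch 1.9.1, Python 3.8.0, and CUDA 11.1. A small sanity check, assuming a standard PyTorch install, can confirm that a reproduction environment matches those reported versions:

```python
# Minimal environment check against the versions reported in the paper;
# an illustrative sanity check, not part of the authors' artifact.
import platform
import torch

print("Python :", platform.python_version())  # reported: 3.8.0
print("PyTorch:", torch.__version__)          # reported: 1.9.1
print("CUDA   :", torch.version.cuda)         # reported: 11.1
print("GPU    :", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")
```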
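
The Experiment Setup row fixes the attack (PGD-10 with the specified ϵ-bounds) and the optimizer (SGD, learning rate 1e-4, momentum 0.9, batch size 200). The sketch below illustrates only that inner PGD-10 loop and the outer adversarial-training step: the step size `alpha`, the random start, and the plain cross-entropy objective are assumptions, and the full DataFreeShield pipeline (synthetic data generation, L_DFShield, GradRefine) is not reproduced here.

```python
# A minimal PGD-10 adversarial-training sketch under the quoted settings;
# alpha and the loss are assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha=2/255, steps=10):
    """L-inf PGD (Madry et al., 2018) with a random start; alpha is assumed."""
    x = x.detach()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                       # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)     # project to eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def train_adversarial(model, loader, eps, epochs=100, device="cuda"):
    """Outer loop with the quoted optimizer: SGD, lr=1e-4, momentum=0.9.
    `loader` is assumed to yield batches of 200 as stated in the paper."""
    opt = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
    model.to(device).train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd_attack(model, x, y, eps)
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()
```

Note that DataFreeShield trains on synthetic samples rather than the original training set, so in the paper this loop would consume generated data and the L_DFShield objective instead of real batches and plain cross-entropy.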