Online Adversarial Purification based on Self-supervised Learning
Authors: Changhao Shi, Chester Holtz, Gal Mishne
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that SOAP performs competitively on various architectures across different datasets with only a small computation overhead compared with vanilla training. We evaluate SOAP on the MNIST, CIFAR10 and CIFAR100 datasets following Madry et al. (2017). |
| Researcher Affiliation | Academia | Changhao Shi (1), Chester Holtz (2) & Gal Mishne (1,2,3); (1) Department of Electrical and Computer Engineering, (2) Department of Computer Science and Engineering, (3) Halıcıoğlu Data Science Institute, University of California, San Diego. {cshi,chholtz,gmishne}@ucsd.edu |
| Pseudocode | Yes | Algorithm 1 PGD attack |
| Open Source Code | Yes | Code is available at https://github.com/Mishne-Lab/SOAP. |
| Open Datasets | Yes | We evaluate SOAP on the MNIST, CIFAR10 and CIFAR100 datasets following Madry et al. (2017). MNIST (LeCun et al., 1998). CIFAR10 & CIFAR100 (Krizhevsky & Hinton, 2009). |
| Dataset Splits | No | The paper implicitly uses the standard train/test splits for MNIST, CIFAR10, and CIFAR100 by stating 'following Madry et al. (2017)'. However, it does not explicitly describe a validation split (percentages, counts, or selection methodology) held out from the training data. |
| Hardware Specification | Yes | To measure the training complexity, we run each training method for 30 epochs on a single Nvidia Quadro RTX 8000 GPU, and report the average epoch time consumption. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as libraries or frameworks (e.g., PyTorch version, Python version, CUDA version). |
| Experiment Setup | Yes | FCN is trained for 100 epochs with an initial learning rate of 0.01 and CNN for 200 epochs with an initial learning rate of 0.1, using SGD. In both cases the learning rate is decayed by a factor of 10 at the halfway point. The batch size is 128. SOAP runs T = 5 purification iterations with step size γ = 0.1. |
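The pseudocode row above refers to the paper's Algorithm 1, a standard PGD attack. As a minimal NumPy sketch of PGD under an L-infinity constraint, assuming a hypothetical `grad_loss` callback returning the loss gradient with respect to the input (the `eps`, `alpha`, and `steps` values here are illustrative, not necessarily the paper's):

```python
import numpy as np

def pgd_attack(x, grad_loss, eps=0.3, alpha=0.01, steps=40):
    """PGD attack sketch: iterated signed gradient ascent on the
    classification loss, projected back into the L-infinity eps-ball
    around the clean input x. grad_loss(x_adv) is a hypothetical
    callback returning dL/dx at x_adv."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv

# Toy check with a linear surrogate loss L(x) = w . x, so dL/dx = w.
x = np.full(4, 0.5)
w = np.array([1.0, -1.0, 1.0, -1.0])
adv = pgd_attack(x, lambda z: w)
```

The clipping order (eps-ball first, then the [0, 1] pixel box) matches the usual PGD formulation for image inputs.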
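The setup row fixes SOAP's test-time purification hyperparameters (T = 5 iterations, step size γ = 0.1). A minimal sketch of that purification loop, assuming a generic differentiable self-supervised auxiliary loss supplied via a hypothetical `grad_aux_loss` callback (the paper's actual auxiliary objectives are not reproduced here):

```python
import numpy as np

def purify(x, grad_aux_loss, T=5, gamma=0.1):
    """Test-time purification sketch: T gradient-descent steps on a
    self-supervised auxiliary loss, starting from the (possibly
    adversarial) input x. grad_aux_loss(x) is a hypothetical callback
    returning the gradient of the auxiliary loss with respect to x."""
    for _ in range(T):
        x = x - gamma * grad_aux_loss(x)
    return x

# Toy example: quadratic auxiliary loss L(x) = ||x||^2 / 2, so grad = x.
# Each step scales x by (1 - gamma); after T steps, by (1 - gamma)^T.
x0 = np.array([1.0, -2.0])
xT = purify(x0, lambda x: x)
```

With the default T = 5 and γ = 0.1 above, the toy input is contracted by a factor of 0.9^5, illustrating how the loop drives the auxiliary loss down before the purified input is passed to the classifier.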