Adversarial Neural Pruning with Latent Vulnerability Suppression
Authors: Divyam Madaan, Jinwoo Shin, Sung Ju Hwang
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our Adversarial Neural Pruning with Vulnerability Suppression (ANP-VS) method on multiple benchmark datasets, on which it not only obtains state-of-the-art adversarial robustness but also improves the performance on clean examples, using only a fraction of the parameters used by the full network. |
| Researcher Affiliation | Collaboration | 1School of Computing, KAIST, South Korea 2School of Electrical Engineering, KAIST, South Korea 3Graduate School of AI, KAIST, South Korea 4AITRICS, South Korea. |
| Pseudocode | Yes | Algorithm 1 Adversarial training by ANP-VS |
| Open Source Code | Yes | The code is available online: https://github.com/divyam3897/ANP_VS |
| Open Datasets | Yes | 1. MNIST. This dataset (LeCun, 1998)... 2. CIFAR-10. This dataset (Krizhevsky, 2012)... 3. CIFAR-100. This dataset (Krizhevsky, 2012) |
| Dataset Splits | No | The paper reports 'training instances' and 'test instances' for each dataset, but does not describe a separate validation split used in its experiments. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions software components like 'LeNet-5-Caffe', 'PyTorch', and 'PGD' but does not specify their version numbers. |
| Experiment Setup | Yes | For MNIST, we consider a Lenet-5-Caffe model with a perturbation radius of ε = 0.3, perturbation per step of 0.01, 20 PGD steps for training, and 40 PGD steps with random restarts for evaluating the trained model. ... We use ε = 0.03, 10 PGD steps for training and 40 steps with random restart for evaluation. |
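The PGD attack schedule quoted above (for MNIST: perturbation radius ε = 0.3, step size 0.01, 20 steps for training, 40 steps with random restarts for evaluation) can be sketched as follows. This is a minimal illustration of L∞ PGD using NumPy and a toy logistic-regression model with an analytic input gradient; the paper's actual implementation uses PyTorch and the full ANP-VS networks, and all function and variable names here are hypothetical.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.01, steps=20,
               random_start=True, rng=None):
    """L-infinity PGD against a toy binary logistic-regression model.

    Hyperparameters mirror the paper's MNIST schedule: eps=0.3,
    step size 0.01, 20 steps for training (evaluation uses 40 steps
    with random restarts). The model here is a stand-in, not ANP-VS.
    """
    rng = rng or np.random.default_rng(0)
    x_adv = x.copy()
    if random_start:
        # random restart: start from a random point in the eps-ball
        x_adv = np.clip(x_adv + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
    for _ in range(steps):
        # forward pass: sigmoid probabilities for the logistic loss
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        # analytic gradient of the cross-entropy loss w.r.t. the input
        grad = (p - y)[:, None] * w[None, :]
        # ascend the loss, then project back into the eps-ball and [0, 1]
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

For evaluation one would call this with `steps=40` and repeat over several random restarts, keeping the perturbation that maximizes the loss, as the quoted setup describes.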