Adversarial Parameter Attack on Deep Neural Networks
Authors: Lijia Yu, Yihan Wang, Xiao-Shan Gao
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, algorithms are given which can be used to compute adversarial parameters and numerical experiments are used to demonstrate that the algorithms are effective to produce high-quality adversarial parameters for frequently-used networks like VGG, deep VGG (Simonyan & Zisserman, 2014), ResNet (He et al., 2016), Wide-ResNet on the frequently-used datasets like CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009) and Tiny-ImageNet (Le & Yang, 2015). |
| Researcher Affiliation | Academia | Academy of Mathematics and Systems Science, Chinese Academy of Sciences; University of Chinese Academy of Sciences. |
| Pseudocode | Yes | Algorithm 1 Adversarial Parameter Attack under L∞ norm. Algorithm 2 Adversarial Parameter Attack under L0 norm. (A hedged sketch of the L∞ attack appears below the table.) |
| Open Source Code | Yes | The codes of the experiments can be found in https://github.com/EhanW/adversarial-parameter-attack. |
| Open Datasets | Yes | The datasets include CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and Tiny-ImageNet (Le & Yang, 2015). |
| Dataset Splits | No | The paper mentions using a 'training set' and 'test set' but does not explicitly provide details about a validation set, its size, or the splitting methodology for validation data. |
| Hardware Specification | Yes | The GPU we used in the experiment is NVIDIA GeForce RTX 3090. |
| Software Dependencies | No | The paper discusses various models and attack methods (e.g., PGD, AutoAttack) but does not list specific versions of software dependencies such as the programming language, libraries (e.g., PyTorch, TensorFlow), or other frameworks used for the experiments. |
| Experiment Setup | Yes | Training Details for Algorithm 1. In Phase one, we train with 10 epochs, and each epoch has learning rate 0.1. ... In Phase two, we train with 40 epochs, and each epoch has learning rate 0.002. The learning rate reduces by half at the 20-th epoch. |
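The pseudocode row references Algorithm 1, an adversarial parameter attack under the L∞ norm. Below is a minimal PyTorch-style sketch of the general idea, assuming a trained `model` and a `DataLoader`: each weight is perturbed by gradient ascent on the classification loss and then projected back into a relative L∞ ball around its original value. The hyperparameters (`eps`, `steps`, `lr`) and the plain loss-maximization objective are illustrative assumptions, not the paper's exact Algorithm 1, which additionally balances preserving clean accuracy against degrading robustness.

```python
import copy
import torch
import torch.nn.functional as F

def linf_parameter_attack(model, loader, eps=0.05, steps=20, lr=1e-3, device="cuda"):
    """Sketch: perturb trained weights within a relative L_inf budget, keeping each
    weight w inside [w0 - eps*|w0|, w0 + eps*|w0|] around its original value w0."""
    attacked = copy.deepcopy(model).to(device)
    originals = [p.detach().clone() for p in attacked.parameters()]
    opt = torch.optim.SGD(attacked.parameters(), lr=lr)

    for _ in range(steps):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            # Gradient ascent on the classification loss (negated for the optimizer).
            loss = -F.cross_entropy(attacked(x), y)
            loss.backward()
            opt.step()
            # Project every parameter back into its relative L_inf ball.
            with torch.no_grad():
                for p, p0 in zip(attacked.parameters(), originals):
                    bound = eps * p0.abs()
                    p.copy_(torch.minimum(torch.maximum(p, p0 - bound), p0 + bound))
    return attacked
```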
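The experiment-setup row quotes a two-phase training schedule for Algorithm 1. Here is a short sketch of that schedule, assuming SGD and a hypothetical `train_epoch(model, loader, optimizer)` helper; neither the optimizer choice nor the helper is specified in the quoted text.

```python
import torch

def train_two_phase(model, loader, train_epoch):
    # Phase one: 10 epochs at learning rate 0.1.
    opt1 = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(10):
        train_epoch(model, loader, opt1)

    # Phase two: 40 epochs at learning rate 0.002, halved at the 20th epoch.
    opt2 = torch.optim.SGD(model.parameters(), lr=0.002)
    sched = torch.optim.lr_scheduler.MultiStepLR(opt2, milestones=[20], gamma=0.5)
    for _ in range(40):
        train_epoch(model, loader, opt2)
        sched.step()
```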