Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses
Authors: Gaurang Sriramanan, Sravanti Addepalli, Arya Baburaj, R. Venkatesh Babu
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present details related to the experiments conducted to validate our proposed approach. We first present the experimental results of the proposed attack GAMA, followed by details on evaluation of the proposed defense GAT. The primary dataset used for all our evaluations is CIFAR-10 [20]. We also show results on MNIST [23] and ImageNet [11] for the proposed attack GAMA in the main paper and for the proposed defense GAT in Section-6 of the Supplementary. |
| Researcher Affiliation | Academia | Gaurang Sriramanan, Sravanti Addepalli, Arya Baburaj, R. Venkatesh Babu; Video Analytics Lab, Department of Computational and Data Sciences, Indian Institute of Science, Bangalore, India |
| Pseudocode | Yes | Algorithm 1: Guided Adversarial Margin Attack (a hedged code sketch of this procedure appears after the table). |
| Open Source Code | Yes | Our code and pre-trained models are available here: https://github.com/val-iisc/GAMA-GAT. |
| Open Datasets | Yes | The primary dataset used for all our evaluations is CIFAR-10 [20]. We also show results on MNIST [23] and ImageNet [11] for the proposed attack GAMA in the main paper and for the proposed defense GAT in Section-6 of the Supplementary. |
| Dataset Splits | No | The paper mentions datasets and refers to training and testing, but does not provide specific details on the train/validation/test dataset splits (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper does not provide specific hardware details (like GPU models, CPU types, or memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | The attack is initialized using random Bernoulli noise of magnitude ε. This provides a better initialization within the ε bound when compared to Uniform or Gaussian noise, as the resultant image would be farther away from the clean image in this case, resulting in more reliable gradients initially. The weighting factor λ of the ℓ2 term in the loss function is linearly decayed to 0 over τ steps. We use an initial step size of η for GAMA-PGD and γ for GAMA-FW, and decay this by a factor of d at intermediate steps. (See the sketch following this table.) |
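
For reference, the objective this setup describes pairs a margin loss on the model's softmax outputs with the λ-weighted ℓ2 relaxation term quoted in the Experiment Setup row. The form below is a reconstruction from that description rather than a verbatim copy of the paper's equation; f(x) denotes the softmax output for input x and y the true label:

```latex
\max_{\tilde{x}\,:\,\|\tilde{x}-x\|_{\infty}\le\epsilon}\;\;
\lambda\,\bigl\|f(x)-f(\tilde{x})\bigr\|_{2}^{2}
\;+\;\max_{j\neq y} f_{j}(\tilde{x})\;-\;f_{y}(\tilde{x})
```

With λ linearly decayed to 0 over the first τ steps, the relaxation term guides the early iterations while the pure margin loss drives the final ones.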
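
A minimal PyTorch sketch of the GAMA-PGD loop under this assumed objective follows. The Bernoulli initialization, the linear decay of λ over τ steps, and the step-size decay by a factor d all follow the quoted setup; the loss form, the initial λ, the default values, and all names (`gama_pgd`, `lam0`, `decay_steps`) are illustrative placeholders rather than the paper's settings. The authors' reference implementation lives in the linked repository.

```python
import torch
import torch.nn.functional as F

def gama_pgd(model, x, y, eps, steps=100, tau=80, eta=2/255,
             decay_steps=(60, 85), decay_factor=0.5, lam0=10.0):
    """Hedged sketch of GAMA-PGD (Algorithm 1); not the authors' code.

    Maximizes an assumed margin loss on softmax outputs plus a
    lambda-weighted l2 term between clean and perturbed softmax outputs,
    with lambda linearly decayed to 0 over `tau` steps and the step size
    decayed by `decay_factor` at `decay_steps`, per the Experiment Setup
    row. All default values are placeholders, not the paper's settings.
    """
    model.eval()
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)

    # Random Bernoulli (+/- eps) initialization: the start point lies on a
    # corner of the l-inf ball, far from the clean image, which the paper
    # argues yields more reliable gradients than uniform or Gaussian noise.
    delta = eps * torch.sign(torch.empty_like(x).uniform_(-1, 1))
    x_adv = torch.clamp(x + delta, 0.0, 1.0).detach()

    step = eta
    for t in range(steps):
        lam = lam0 * max(0.0, 1.0 - t / tau)   # linear decay of lambda to 0
        if t in decay_steps:
            step *= decay_factor               # step-size decay by factor d

        x_adv.requires_grad_(True)
        p_adv = F.softmax(model(x_adv), dim=1)

        # Margin term: most confident non-true class minus the true class.
        onehot = F.one_hot(y, p_adv.shape[1]).bool()
        p_true = p_adv[onehot]
        p_other = p_adv.masked_fill(onehot, -1.0).max(dim=1).values
        margin = p_other - p_true

        # Guided relaxation: push perturbed outputs away from clean outputs.
        relax = ((p_adv - p_clean) ** 2).sum(dim=1)

        loss = (margin + lam * relax).sum()
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Signed-gradient ascent with projection back to the eps l-inf ball.
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

GAMA-FW would replace the signed-gradient projection update with a Frank-Wolfe step of initial size γ, per the quoted setup.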