Functional Adversarial Attacks
Authors: Cassidy Laidlaw, Soheil Feizi
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experiment by attacking defended and undefended classifiers with ReColorAdv, by itself and in combination with other attacks. We find that ReColorAdv is a strong attack, reducing the accuracy of a ResNet-32 trained on CIFAR-10 to 3.0%. |
| Researcher Affiliation | Academia | Cassidy Laidlaw, University of Maryland, claidlaw@umd.edu; Soheil Feizi, University of Maryland, sfeizi@cs.umd.edu |
| Pseudocode | No | The paper describes the methods and optimization process using textual descriptions and mathematical formulations, but it does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We release our code at https://github.com/cassidylaidlaw/ReColorAdv. |
| Open Datasets | Yes | We evaluate ReColorAdv against defended and undefended neural networks on CIFAR-10 [13] and ImageNet [20]. (A CIFAR-10 loading sketch follows the table.) |
| Dataset Splits | No | The paper uses standard datasets like CIFAR-10 and ImageNet but does not explicitly state the training, validation, or test split percentages or sample counts in the main text. |
| Hardware Specification | Yes | All experiments were run on NVIDIA V100 GPUs. |
| Software Dependencies | No | The paper mentions using 'PyTorch [19]' and 'Adam optimizer [12]' but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | We use the standard Adam optimizer [12] with a learning rate of 0.001. (An illustrative attack sketch using these settings follows the table.) |
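
The paper evaluates on standard, publicly available datasets. As a convenience for reproduction, below is one common way to load the CIFAR-10 test split with torchvision; the root path and batch size are assumptions, since the paper does not state its exact data-loading code or split counts.

```python
# Minimal sketch: load the CIFAR-10 test split with torchvision.
# The root path and batch size are assumptions; the paper does not
# specify its data-loading code.
import torch
import torchvision
import torchvision.transforms as T

test_set = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=T.ToTensor()
)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=128, shuffle=False)
```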
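
The Experiment Setup row quotes the paper's optimizer choice (Adam, learning rate 0.001). The sketch below illustrates the core idea of a functional adversarial attack under stated assumptions: a single color transformation f is applied uniformly to every pixel of an image, and its parameters are optimized with Adam to maximize the classifier's loss. The affine parameterization of f, the step count, and the bound `eps` are illustrative assumptions; ReColorAdv itself uses a more expressive parameterization of f, for which the released repository is the authoritative reference.

```python
# Illustrative sketch of a functional adversarial attack: a single
# color map f (here an affine transform on RGB values) is applied to
# every pixel, and its parameters are optimized with Adam (lr 0.001,
# matching the paper's reported setting). This is NOT the authors'
# exact ReColorAdv parameterization; `steps` and `eps` are assumptions.
import torch
import torch.nn.functional as F

def functional_color_attack(model, x, y, steps=100, lr=1e-3, eps=0.06):
    """x: (N, 3, H, W) images in [0, 1]; y: (N,) integer labels."""
    eye = torch.eye(3, device=x.device)
    # Per-image affine color transform f(c) = A @ c + b, shared by all pixels.
    A = eye.repeat(x.size(0), 1, 1).requires_grad_(True)
    b = torch.zeros(x.size(0), 3, 1, device=x.device, requires_grad=True)
    opt = torch.optim.Adam([A, b], lr=lr)

    flat = x.flatten(2)  # (N, 3, H*W): each column is one pixel's color
    for _ in range(steps):
        adv = (A @ flat + b).clamp(0, 1).view_as(x)
        loss = -F.cross_entropy(model(adv), y)  # ascend the classification loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():  # crude projection: keep f near the identity
            b.clamp_(-eps, eps)
            A.copy_(eye + (A - eye).clamp(-eps, eps))

    return (A @ flat + b).clamp(0, 1).view_as(x).detach()
```

Robust accuracy can then be estimated by running this attack over `test_loader` and measuring the model's accuracy on the returned adversarial examples.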