Generating Distributional Adversarial Examples to Evade Statistical Detectors
Authors: Yigitcan Kaya, Muhammad Bilal Zafar, Sergul Aydore, Nathalie Rauschmayr, Krishnaram Kenthapadi
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our attack on image classification tasks with CNNs, where adversarial attacks are most well-studied. Our main focus in this section is evaluating SIA against DDs. We experiment on two popular datasets: CIFAR-10 (Krizhevsky et al., 2009) and Tiny-ImageNet (Deng et al., 2009). |
| Researcher Affiliation | Collaboration | 1University of Maryland College Park 2Amazon Web Services 3Fiddler AI. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an unambiguous statement about releasing its own source code for the described methodology, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | We experiment on two popular datasets: CIFAR-10 (Krizhevsky et al., 2009) and Tiny-ImageNet (Deng et al., 2009). |
| Dataset Splits | No | The paper mentions a 'training set' and a 'holdout set' but does not explicitly describe a distinct 'validation set' split, nor its size, for the trained models. |
| Hardware Specification | No | The paper mentions using a 'commodity GPU' but does not specify the exact GPU model, CPU, or other detailed hardware specifications used for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow, CUDA versions) used for its implementation. |
| Experiment Setup | Yes | Unless specified otherwise, we set ε = 0.03, following prior work (Madry et al., 2018). We perform 200 PGD iterations, use a scheduler that periodically reduces the step size, and craft 250 AEs at once as a batch. |
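
The quoted experiment setup describes a standard L∞ PGD configuration (ε = 0.03, 200 iterations, a step-size scheduler, batches of 250 adversarial examples). The sketch below is a minimal, illustrative PGD loop matching those stated hyperparameters only; it is not the paper's SIA attack. The framework (PyTorch), the [0, 1] pixel range, the random start, the initial step size, and the decay schedule (`step_size`, `decay_every`, `decay_factor`) are assumptions, since the paper does not specify them.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.03, iters=200,
             step_size=0.01, decay_every=50, decay_factor=0.5):
    """L-infinity PGD with a periodic step-size decay schedule.

    eps = 0.03 and iters = 200 follow the paper's stated setup; the
    initial step size and decay schedule are assumed values.
    """
    x_adv = x.clone().detach()
    # Random start inside the eps-ball (common PGD practice, assumed here).
    x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0.0, 1.0)

    alpha = step_size
    for i in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Untargeted attack: ascend the loss along the gradient sign.
            x_adv = x_adv + alpha * grad.sign()
            # Project back into the eps-ball around x and the valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
        # Periodically reduce the step size, as the paper's scheduler does.
        if (i + 1) % decay_every == 0:
            alpha *= decay_factor
    return x_adv.detach()

# Usage (hypothetical): craft 250 adversarial examples at once as a batch,
# matching the batch size quoted in the setup. `model`, `images`
# (250 x 3 x 32 x 32 in [0, 1]) and `labels` are assumed to exist.
# adv_batch = pgd_linf(model, images, labels)
```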