Identification of the Adversary from a Single Adversarial Example
Authors: Minhao Cheng, Rui Min, Haochen Sun, Pin-Yu Chen
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, the effectiveness of our proposed framework is evaluated by extensive experiments with different model architectures, adversarial attacks, and datasets. |
| Researcher Affiliation | Collaboration | 1Department of Computer Science & Engineering, The Hong Kong University of Science and Technology, Hong Kong 2IBM Research, NY, USA. |
| Pseudocode | No | The paper describes methods in narrative text and does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is publicly available at https://github.com/rmin2000/adv_tracing.git. |
| Open Datasets | Yes | We conduct our experiments on three popular image classification datasets: GTSRB (Stallkamp et al., 2012), CIFAR-10 (Krizhevsky et al., 2009) and Tiny-ImageNet (Le & Yang, 2015). |
| Dataset Splits | No | The paper mentions using CIFAR-10, GTSRB, and Tiny-ImageNet but does not explicitly state train/validation/test splits, whether as percentages, absolute sample counts, or citations to predefined splits. |
| Hardware Specification | Yes | All our experiments were implemented in PyTorch and conducted using an RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions software such as PyTorch and the Adversarial Robustness Toolbox (ART) but does not provide version numbers for these dependencies. |
| Experiment Setup | Yes | In the pretraining stage, the base model is trained with the Adam optimizer with a learning rate of 10⁻³ and a batch size of 128 for 50 epochs. For every copy's watermark, we independently and randomly sample 100 pixels to mask for both CIFAR-10 and GTSRB, and increase the mask size to 400 pixels for Tiny-ImageNet, which keeps the masking rate at around 3.26%. |
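The quoted masking rate checks out: 100 positions out of 3×32×32 = 3072 elements (CIFAR-10/GTSRB) and 400 out of 3×64×64 = 12288 (Tiny-ImageNet) are both ≈3.26%. The per-copy mask sampling could be sketched as below; `sample_watermark_mask` is a hypothetical helper, not the paper's actual code from the linked repository.

```python
import random

def sample_watermark_mask(height, width, channels, n_masked, seed=None):
    """Independently sample n_masked positions (without replacement) from a
    flattened (channels * height * width) image tensor and return a flat
    boolean mask. Hypothetical sketch of the per-copy watermark masking."""
    rng = random.Random(seed)
    total = channels * height * width
    masked = set(rng.sample(range(total), n_masked))
    return [i in masked for i in range(total)]

# CIFAR-10 / GTSRB: 100 of 3*32*32 = 3072 positions -> ~3.26% masking rate
mask = sample_watermark_mask(32, 32, 3, 100, seed=0)
print(round(sum(mask) / len(mask), 4))  # ~0.0326
```

Using a different seed per model copy would give each copy its own independent watermark mask, matching the "independently randomly sample" phrasing in the setup description.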