Preemptive Image Robustification for Protecting Users against Man-in-the-Middle Adversarial Attacks
Authors: Seungyong Moon, Gaon An, Hyun Oh Song (pp. 7823-7830)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on CIFAR-10 and ImageNet show our method can effectively robustify natural images within the given modification budget. |
| Researcher Affiliation | Collaboration | 1 Department of Computer Science and Engineering, Seoul National University, Seoul, Korea 2 Deep Metrics, Seoul, Korea |
| Pseudocode | Yes | Algorithm 1: Preemptive robustification algorithm |
| Open Source Code | Yes | The code is available online 1. https://github.com/snu-mllab/preemptive_robustification |
| Open Datasets | Yes | We evaluate our methods on CIFAR-10 and ImageNet by measuring classification accuracies of preemptively robustified images under the grey-box and white-box adversaries. |
| Dataset Splits | No | The paper evaluates on CIFAR-10 and ImageNet, which are standard benchmarks, but does not explicitly provide training/validation/test split percentages, absolute sample counts per split, or citations to predefined splits for the validation set in the main text. |
| Hardware Specification | No | No specific hardware (GPU models, CPU models, or cloud computing instances with specifications) used for running experiments is explicitly mentioned in the paper. |
| Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al. 2019)' but does not provide specific version numbers for PyTorch or any other software dependencies needed for replication. |
| Experiment Setup | Yes | As it is natural to assume that the defender and the adversary have the same modification budget, we set δ = ϵ for all experiments. Both adversaries use 20-step untargeted PGD and AutoAttack (Croce and Hein 2020) to find adversarial examples. For the white-box adversary, we sweep the final perturbation budget ϵ and report the lowest accuracy measured. We set the noise level to σ = 0.1. The noise levels are σ = 0.25 for CIFAR-10 and σ = 1.0 for ImageNet. |
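The 20-step untargeted L∞ PGD attack referenced in the setup row can be sketched as follows. This is a minimal NumPy illustration against a toy binary linear classifier with an analytic logistic-loss gradient, not the authors' PyTorch implementation; the model, loss, and the 2.5·ε/steps step-size heuristic are assumptions for illustration only.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.03, steps=20, alpha=None):
    """Untargeted L-inf PGD against a toy binary linear classifier.

    The classifier scores with w @ x + b; y is the label in {-1, +1}.
    Loss is logistic loss on the margin y * (w @ x + b). The attack
    ascends the loss gradient and projects back into the eps-ball
    around the clean input x after every step.
    """
    if alpha is None:
        alpha = 2.5 * eps / steps  # common step-size heuristic (assumption)
    x_adv = x.copy()
    for _ in range(steps):
        margin = y * (w @ x_adv + b)
        # d/dx of log(1 + exp(-margin)) is -y * w * sigmoid(-margin)
        grad = -y * w / (1.0 + np.exp(margin))
        x_adv = x_adv + alpha * np.sign(grad)     # signed gradient ascent
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv
```

The grey-box/white-box evaluation in the paper then measures whether preemptively robustified inputs keep their correct label under attacks of this form, with the defender's budget δ set equal to the attacker's ϵ.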