Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

On the Limitations of Stochastic Pre-processing Defenses

Authors: Yue Gao, Ilia Shumailov, Kassem Fawaz, Nicolas Papernot

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this paper, we empirically and theoretically investigate such stochastic pre-processing defenses and demonstrate that they are flawed."
Researcher Affiliation | Collaboration | Yue Gao (University of Wisconsin–Madison), Ilia Shumailov (University of Cambridge & Vector Institute), Kassem Fawaz (University of Wisconsin–Madison), Nicolas Papernot (University of Toronto & Vector Institute)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. It provides mathematical formulations but no pseudocode.
Open Source Code | Yes | "Our code is available at https://github.com/wi-pi/stochastic-preprocessing-defenses."
Open Datasets | Yes | "We conduct all experiments on ImageNet [30] and ImageNette [9]."
Dataset Splits | Yes | "For ImageNet, our test data consists of 1,000 images randomly sampled from the validation set. ImageNette is a ten-class subset of ImageNet, and we test on its validation set. ... These models are fine-tuned on the training data processed by tested defenses. ... More details of datasets and models can be found in Appendices D.1 and D.2."
Hardware Specification | Yes | "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix D."
Software Dependencies | No | The paper notes that detailed settings are in Appendix D, but the main text does not list specific software dependencies (e.g., library names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | "All attacks use maximum ℓ∞ perturbation ε = 8/255 with step size α ∈ {1/255, 2/255}. ... We only use constant step sizes and no random restarts for PGD. ... More details and intuitions of the attacks' settings and implementation can be found in Appendix D.4."
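The PGD settings quoted in the Experiment Setup row (ℓ∞ budget ε = 8/255, constant step size, no random restarts) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it attacks a toy linear softmax classifier (`W`, `b`) with an analytic input gradient, whereas the paper attacks fine-tuned ImageNet models; the function name, step count, and classifier are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def pgd_attack(W, b, x, y, eps=8/255, alpha=2/255, steps=10):
    """PGD with a constant step size and no random restarts, matching the
    quoted settings (eps = 8/255, alpha chosen from {1/255, 2/255}).
    W: (d, k) weights, b: (k,) bias of an illustrative linear classifier;
    x: (n, d) inputs in [0, 1]; y: (n,) integer labels."""
    x_adv = x.copy()
    onehot = np.eye(W.shape[1])[y]
    for _ in range(steps):
        p = softmax(x_adv @ W + b)
        grad = (p - onehot) @ W.T                  # d(cross-entropy)/d(input)
        x_adv = x_adv + alpha * np.sign(grad)      # constant ascent step
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # project into l_inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep valid pixel range
    return x_adv
```

Because the step is constant and there are no random restarts, the attack is deterministic given the model and inputs, which matters when evaluating stochastic defenses.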