Intriguing Properties of Input-Dependent Randomized Smoothing

Authors: Peter Súkeník, Aleksei Kuvshinov, Stephan Günnemann

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present one concrete design of the smoothing variance function and test it on CIFAR10 and MNIST. Our design mitigates some of the problems of classical smoothing and is formally underpinned, yet further improvement of the design is still necessary. We test our IDRS and σ(x) function extensively. For both the CIFAR10 (Krizhevsky, 2009) and MNIST (LeCun et al., 1999) datasets, we analyze a series of different experimental setups, including experiments with an input-dependent train-time Gaussian data augmentation.
Researcher Affiliation | Academia | (1) Institute of Science and Technology Austria, Klosterneuburg, Austria; (2) Technical University of Munich, School of Computation, Information and Technology, Munich, Germany; (3) Munich Data Science Institute, Munich, Germany.
Pseudocode | Yes | Algorithm 3: Pseudocode for certification and prediction of our method, based on (Cohen et al., 2019). (A hedged sketch of such a certification routine is given after this table.)
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the methodology is openly available.
Open Datasets | Yes | We test our IDRS and σ(x) function extensively. For both the CIFAR10 (Krizhevsky, 2009) and MNIST (LeCun et al., 1999) datasets, we analyze a series of different experimental setups, including experiments with an input-dependent train-time Gaussian data augmentation.
Dataset Splits | No | The paper mentions 'training' and 'test' sets for experiments but does not explicitly specify train/validation/test dataset splits or reference predefined validation splits for reproducibility.
Hardware Specification | No | The paper only vaguely mentions 'our machine' without specifying any concrete hardware details such as GPU/CPU models, processors, or memory specifications used for running experiments.
Software Dependencies | No | The paper mentions various theoretical concepts and some general software-related terms but does not provide specific version numbers for programming languages, libraries, or frameworks used in the experiments.
Experiment Setup | Yes | Here, we compare (Cohen et al., 2019)'s evaluations for σ = 0.12, 0.25, 0.50 with our evaluations, setting σ_b = σ, r = 0.01, 0.02, k = 20, m = 5, 1.5 (for CIFAR10 and MNIST, respectively), applied on models trained with Gaussian data augmentation, using a constant standard deviation roughly equal to the average test-time σ(x) or the test-time σ. (An illustrative σ(x) sketch built from these hyperparameters is given after this table.)
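
The Pseudocode row above points to Algorithm 3, a certification and prediction routine built on (Cohen et al., 2019). Since the algorithm itself is not reproduced in this summary, the following is a minimal Python sketch of a Cohen-style Monte-Carlo certification with a pluggable input-dependent noise function sigma_fn(x). All names here (certify, _sample_counts, sigma_fn) are our own, and the returned radius uses the classical constant-σ formula σ·Φ⁻¹(p_A); the paper argues that this formula is not directly valid when σ varies with the input and derives an adjusted guarantee, which is not reproduced below.

```python
# Illustrative sketch only -- not the paper's Algorithm 3.
import numpy as np
from scipy.stats import beta, norm

def _sample_counts(model, x, sigma, num, num_classes, batch=1000):
    """Count class predictions of `model` on `num` Gaussian-noised copies of x."""
    counts = np.zeros(num_classes, dtype=np.int64)
    while num > 0:
        b = min(batch, num)
        noisy = x[None, ...] + sigma * np.random.randn(b, *x.shape)
        preds = np.asarray(model(noisy))        # assumed: integer class ids, shape (b,)
        counts += np.bincount(preds, minlength=num_classes)
        num -= b
    return counts

def certify(model, x, sigma_fn, num_classes, n0=100, n=100_000, alpha=0.001):
    """Return (predicted class, certified radius), or (None, 0.0) on abstention."""
    sigma = float(sigma_fn(x))                  # input-dependent noise level
    # Step 1: guess the top class of the smoothed classifier from a small sample.
    c_hat = int(np.argmax(_sample_counts(model, x, sigma, n0, num_classes)))
    # Step 2: one-sided Clopper-Pearson lower bound on p_A from a large sample.
    k = int(_sample_counts(model, x, sigma, n, num_classes)[c_hat])
    p_lower = beta.ppf(alpha, k, n - k + 1) if k > 0 else 0.0
    if p_lower <= 0.5:
        return None, 0.0                        # abstain
    # Constant-sigma radius; the paper's corrected bound for varying sigma differs.
    return c_hat, sigma * norm.ppf(p_lower)
```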
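
The Experiment Setup row lists the hyperparameters σ_b, r, k, and m of the input-dependent σ(x), but its functional form is not quoted in this summary. Purely as an assumption, the sketch below wires those hyperparameter names into one plausible k-nearest-neighbour rule, so that the result can be passed as sigma_fn to the certify sketch above; it is an illustration, not the paper's definition of σ(x).

```python
# Hypothetical sigma(x) rule -- NOT the paper's definition, only an illustration
# of how the quoted hyperparameters (sigma_b, r, k, m) could parametrize an
# input-dependent noise level that grows with distance from the training data.
import numpy as np

def make_sigma_fn(train_data, sigma_b=0.25, r=0.01, k=20, m=5.0):
    flat_train = train_data.reshape(len(train_data), -1)

    def sigma_fn(x):
        dists = np.linalg.norm(flat_train - x.reshape(1, -1), axis=1)
        d_k = np.sort(dists)[:k].mean()          # mean distance to the k nearest training points
        return sigma_b * np.exp(r * (d_k - m))   # assumed exponential rule
    return sigma_fn
```

Under these assumptions, something like sigma_fn = make_sigma_fn(x_train, sigma_b=0.25, r=0.01, k=20, m=5.0) could then be handed to certify(model, x, sigma_fn, num_classes=10).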