Invariance-Aware Randomized Smoothing Certificates

Authors: Jan Schuchardt, Stephan Günnemann

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We experimentally demonstrate that the provably tight certificates can offer much stronger guarantees, but that in practical scenarios the orbit-based method is a good approximation." |
| Researcher Affiliation | Academia | Jan Schuchardt, Technical University of Munich (j.schuchardt@tum.de); Stephan Günnemann, Technical University of Munich (s.guennemann@tum.de) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | "A reference implementation will be made available at https://www.cs.cit.tum.de/daml/invariance-smoothing." |
| Open Datasets | Yes | "We consider two datasets: 3D point cloud representations of ModelNet40 [126], which consists of CAD models from 40 different categories, and 2D point cloud representations of MNIST [127]." |
| Dataset Splits | No | The paper mentions "default test sets" but does not specify exact percentages, sample counts, or a methodology for training/validation splits. |
| Hardware Specification | Yes | "one can use a large number of samples to obtain narrow bounds at little computational cost (e.g. 0.59s for 100000 samples per confidence bound on an Intel Xeon E5-2630 CPU)." |
| Software Dependencies | No | The paper does not list specific software components with version numbers required to replicate the experiment. |
| Experiment Setup | Yes | "All parameters and experimental details are specified in Appendix B. We use 10000 samples per confidence bound and set α = 0.001, i.e. all certificates hold with 99.9% probability." |
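The experiment setup above (10000 samples per confidence bound, α = 0.001) follows the standard randomized-smoothing recipe: estimate the smoothed classifier's class probability from Monte Carlo samples, lower-bound it with a one-sided Clopper-Pearson interval, and convert the bound into a certified radius. Below is a minimal sketch of that generic pipeline, assuming a Gaussian smoothing distribution and the Cohen et al.-style radius σ·Φ⁻¹(p); the function names are hypothetical, and the paper's invariance-aware certificates are tighter than this baseline.

```python
# Generic randomized-smoothing certificate sketch (NOT the paper's
# invariance-aware certificate; function names are illustrative).
from scipy.stats import beta, norm


def clopper_pearson_lower(k: int, n: int, alpha: float) -> float:
    """One-sided (1 - alpha) lower confidence bound on a binomial
    proportion, given k successes out of n Monte Carlo samples."""
    if k == 0:
        return 0.0
    return beta.ppf(alpha, k, n - k + 1)


def certified_radius(k: int, n: int, alpha: float, sigma: float) -> float:
    """l2 certified radius sigma * Phi^{-1}(p_lower) for Gaussian
    smoothing with std sigma; abstain (radius 0) if p_lower <= 1/2."""
    p_lower = clopper_pearson_lower(k, n, alpha)
    if p_lower <= 0.5:
        return 0.0
    return sigma * norm.ppf(p_lower)


# Illustrative call with the paper's settings: n = 10000 samples per
# confidence bound, alpha = 0.001 (so the bound holds w.p. 99.9%).
r = certified_radius(k=9900, n=10000, alpha=0.001, sigma=0.25)
```

With a high empirical success rate (9900/10000), the certified radius is positive; when the lower bound falls to 1/2 or below, the certificate abstains and returns radius 0.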