Certifying Ensembles: A General Certification Theory with S-Lipschitzness
Authors: Aleksandar Petrov, Francisco Eiras, Amartya Sanyal, Philip Torr, Adel Bibi
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We use the ensembles trained by Horváth et al. (2021) that they have released publicly. The classifiers are based on the ResNet20 and ResNet50 architectures (He et al., 2016) and are trained respectively on CIFAR10 (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015). We use randomized smoothing (Lécuyer et al., 2019; Cohen et al., 2019) to obtain individual classifiers with known continuity properties (S). |
| Researcher Affiliation | Academia | 1Department of Computer Science, University of Oxford, Oxford, UK 2Department of Engineering Science, University of Oxford, Oxford, UK 3ETH AI Center, ETH Zürich, Zürich, Switzerland. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states: 'We use the ensembles trained by Horváth et al. (2021) that they have released publicly. The trained models are available at https://github.com/eth-sri/smoothing-ensembles'. This refers to code and models released by a third party and reused by the authors, not their own source code for the methodology described in this paper. |
| Open Datasets | Yes | We use the ensembles trained by Horváth et al. (2021) ... trained respectively on CIFAR10 (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015). |
| Dataset Splits | No | The paper mentions the 'train split of CIFAR10' and the 'train split of ImageNet', implying standard splits, but it does not provide exact percentages, absolute sample counts, or any explicit mention of a 'validation' split or its details. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or cloud computing specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | We use randomized smoothing (Lécuyer et al., 2019; Cohen et al., 2019) to obtain individual classifiers with known continuity properties (S). Concretely, each model is smoothed with independent Gaussian noise with variance σ². We consider the following ensembles: i. Ensemble of N=6 ResNet20 classifiers trained on CIFAR10 (K=10), trained and smoothed with σ=0.25. ii. Ensemble of N=6 ResNet20 classifiers trained on CIFAR10 (K=10), trained and smoothed with σ=0.50. iii. Ensemble of N=6 ResNet20 classifiers trained on CIFAR10 (K=10), trained and smoothed with σ=1.00. iv. Ensemble of N=3 ResNet50 classifiers trained on ImageNet (K=1000), trained and smoothed with σ=1.00. |
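The randomized-smoothing setup quoted in the Experiment Setup row can be sketched as follows. This is a minimal Monte Carlo illustration of a smoothed classifier g(x) = argmax_c P(f(x + ε) = c) with ε ~ N(0, σ²I); the toy base classifier, sample count, and function names are our own assumptions, not the paper's code:

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma, n_samples=1000,
                     num_classes=10, seed=None):
    """Monte Carlo estimate of the Gaussian-smoothed classifier.

    Draws n_samples perturbations eps ~ N(0, sigma^2 I), queries the
    base classifier on each noisy input, and returns the majority class
    together with the per-class vote counts.
    """
    rng = np.random.default_rng(seed)
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        counts[base_classifier(noisy)] += 1
    return int(np.argmax(counts)), counts

# Hypothetical toy base classifier on an 8-dim input:
# class 1 if the mean coordinate exceeds 0.5, else class 0.
f = lambda x: int(x.mean() > 0.5)
pred, counts = smoothed_predict(f, np.full(8, 0.8), sigma=0.25,
                                num_classes=2, seed=0)
```

In the paper's experiments the base classifiers are the released ResNet20/ResNet50 models and σ ∈ {0.25, 0.50, 1.00}; following Cohen et al. (2019), the empirical class probabilities estimated this way also yield a certified ℓ₂ radius for the smoothed model.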