Consistency Regularization for Certified Robustness of Smoothed Classifiers
Authors: Jongheon Jeong, Jinwoo Shin
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments under various deep neural network architectures and datasets show that the certified ℓ2-robustness can be dramatically improved with the proposed regularization, even achieving results better than or comparable to state-of-the-art approaches at significantly lower training cost and with fewer hyperparameters. |
| Researcher Affiliation | Academia | School of Electrical Engineering, Graduate School of AI, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | Yes | Code is available at https://github.com/jh-jeong/smoothing-consistency. |
| Open Datasets | Yes | We verify the effectiveness of our proposed regularization based on extensive evaluation covering MNIST [21], CIFAR-10 [20], and ImageNet [30] classification datasets. |
| Dataset Splits | No | The paper mentions training and testing on datasets like CIFAR-10 and ImageNet, and refers to following training details from prior works [10, 32], but does not explicitly provide specific train/validation/test dataset splits within its own text. |
| Hardware Specification | Yes | In this experiment, every model is trained on CIFAR-10 using a single NVIDIA TITAN X (Pascal) GPU. |
| Software Dependencies | No | The paper mentions using well-known models like ResNet but does not provide specific version numbers for software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | For a fair comparison, we follow the same training details used in Cohen et al. [10] and Salman et al. [32]. For each model configuration, we consider three different models as varying the noise level σ ∈ {0.25, 0.5, 1.0}. During inference, we apply randomized smoothing with the same σ used in the training. When our regularization is used, we use m = 2 and η = 0.5 unless otherwise specified. |
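
To make the setup above concrete, here is a minimal PyTorch-style sketch of the training objective the paper describes: cross-entropy on m Gaussian-perturbed copies of each input, plus a consistency term that pulls the noisy predictions toward their average and an entropy term weighted by η. The function and variable names (`consistency_loss`, `lbd`, `training_step`) are illustrative rather than the authors' code; the reference implementation lives in the linked repository.

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_list, lbd, eta=0.5):
    # Average softmax prediction over the m noisy copies (m = 2 in the paper).
    m = len(logits_list)
    probs = [F.softmax(logits, dim=1) for logits in logits_list]
    avg_probs = torch.stack(probs).mean(dim=0)

    # KL(average prediction || each noisy prediction), averaged over copies.
    log_probs = [F.log_softmax(logits, dim=1) for logits in logits_list]
    loss_kl = sum(F.kl_div(lp, avg_probs, reduction="batchmean")
                  for lp in log_probs) / m

    # Entropy of the averaged prediction; minimizing it encourages a
    # confident consensus (weighted by eta = 0.5 in the paper).
    loss_ent = -(avg_probs * torch.log(avg_probs + 1e-12)).sum(dim=1).mean()

    return lbd * loss_kl + eta * loss_ent


def training_step(model, x, y, sigma, lbd, m=2, eta=0.5):
    # m Gaussian-perturbed copies, drawn with the same sigma used at inference.
    logits_list = [model(x + torch.randn_like(x) * sigma) for _ in range(m)]
    loss_ce = sum(F.cross_entropy(logits, y) for logits in logits_list) / m
    return loss_ce + consistency_loss(logits_list, lbd, eta)
```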
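
Likewise, "randomized smoothing with the same σ used in the training" refers to classifying via a majority vote over Gaussian-perturbed copies of the input, as in Cohen et al. [10]. The sketch below shows only this Monte Carlo prediction step; the sample count `n` and the name `smoothed_predict` are illustrative, and the statistically certified PREDICT/CERTIFY procedures with abstention are omitted.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def smoothed_predict(model, x, sigma, n=100):
    # Count hard predictions over n Gaussian-perturbed copies of x.
    num_classes = model(x).shape[1]
    counts = torch.zeros(x.shape[0], num_classes, device=x.device)
    for _ in range(n):
        noisy = x + torch.randn_like(x) * sigma  # same sigma as in training
        counts += F.one_hot(model(noisy).argmax(dim=1), num_classes).float()
    # The majority-vote class approximates the smoothed classifier
    # g(x) = argmax_c P(f(x + delta) = c), delta ~ N(0, sigma^2 I).
    return counts.argmax(dim=1)
```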