GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing
Authors: Zhongkai Hao, Chengyang Ying, Yinpeng Dong, Hang Su, Jian Song, Jun Zhu
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on several datasets demonstrate the effectiveness of our approach for robustness certification against multiple kinds of semantic transformations and corruptions, which is not achievable by the alternative baselines. |
| Researcher Affiliation | Collaboration | (1) Department of Computer Science & Technology, Institute for AI, BNRist Center, Tsinghua-Bosch Joint ML Center, THBI Lab, Tsinghua University; (3) RealAI; (5) Tsinghua University-China Mobile Communications Group Co., Ltd. Joint Institute |
| Pseudocode | No | The paper describes algorithms conceptually and mathematically but does not present them in a structured pseudocode or algorithm block format. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We use MNIST (Lecun et al., 1998), CIFAR-10, and CIFAR-100 (Krizhevsky et al., 2009) datasets to verify our methods. |
| Dataset Splits | No | The paper mentions applying data augmentation but does not provide specific train/validation/test dataset splits with percentages or counts. |
| Hardware Specification | Yes | The training process of classifiers and certification for semantic transformations are done on 2080Ti GPUs. |
| Software Dependencies | No | The paper mentions software components like U-Net, Batch Norm, Group Norm, and L1-loss but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | All models are trained by using an Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 0.001 that decays every 50 epochs until convergence. |
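As context for the Experiment Setup row, here is a minimal PyTorch sketch of the quoted training configuration: Adam with an initial learning rate of 0.001 and a step decay every 50 epochs. The placeholder classifier, decay factor (gamma=0.1), epoch budget, and dummy data are assumptions; the quoted text specifies only the optimizer, the initial learning rate, and the decay interval.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Dummy CIFAR-10-shaped data standing in for the real training set.
x = torch.randn(256, 3, 32, 32)
y = torch.randint(0, 10, (256,))
train_loader = DataLoader(TensorDataset(x, y), batch_size=64, shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder classifier
optimizer = optim.Adam(model.parameters(), lr=1e-3)  # initial learning rate of 0.001
# "decays every 50 epochs": StepLR with step_size=50; gamma=0.1 is an assumption.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)
criterion = nn.CrossEntropyLoss()

for epoch in range(150):  # "until convergence"; 150 epochs is illustrative
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
    scheduler.step()  # apply the step decay once per epoch
```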
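For broader context, the base mechanism GSmooth generalizes is randomized smoothing: classify many randomly perturbed copies of the input and take the majority vote. The sketch below shows the standard additive-Gaussian version (Cohen et al., 2019), not GSmooth's semantic-transformation variant; sigma, the sample count, and the toy model are illustrative assumptions.

```python
import torch
from torch import nn

def smoothed_predict(model, x, sigma=0.25, n=1000, num_classes=10):
    """Majority-vote prediction of a Gaussian-smoothed classifier.

    Standard randomized smoothing (Cohen et al., 2019). GSmooth replaces
    the additive pixel noise below with sampled semantic transformations.
    """
    counts = torch.zeros(num_classes, dtype=torch.long)
    with torch.no_grad():
        for _ in range(n):
            noisy = x + sigma * torch.randn_like(x)           # perturb the input
            pred = model(noisy.unsqueeze(0)).argmax(dim=1)    # classify one sample
            counts[pred.item()] += 1                          # tally the vote
    return counts.argmax().item()

# Usage with a toy classifier on a CIFAR-10-shaped input.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
x = torch.randn(3, 32, 32)
print(smoothed_predict(model, x))
```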