SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness
Authors: Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, Do-Guk Kim, Jinwoo Shin
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results demonstrate that the proposed method can significantly improve the certified ℓ2-robustness of smoothed classifiers compared to existing state-of-the-art robust training methods. |
| Researcher Affiliation | Collaboration | Jongheon Jeong (1), Sejun Park (2), Minkyu Kim (3), Heung-Chang Lee (4), Do-Guk Kim (5), Jinwoo Shin (3,1); (1) School of Electrical Engineering, KAIST; (2) Vector Institute for Artificial Intelligence; (3) Kim Jaechul Graduate School of AI, KAIST; (4) Kakao Enterprise; (5) Department of Artificial Intelligence, Inha University |
| Pseudocode | Yes | Algorithm 1 in Appendix A demonstrates a concrete training procedure of SmoothMix using m samples of δ for the Monte Carlo approximation (a hedged sketch of such a training step appears after this table). |
| Open Source Code | Yes | Code is available at https://github.com/jh-jeong/smoothmix. |
| Open Datasets | Yes | We evaluate the effectiveness of our method extensively on MNIST [32], CIFAR-10 [29], and ImageNet [48] classification datasets. |
| Dataset Splits | No | The paper evaluates on the full MNIST and CIFAR-10 test sets, but it does not state explicit train/validation/test split percentages, validation sample counts, or how any validation split was constructed; it relies on the standard splits of these datasets without declaring them explicitly. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions a "PyTorch implementation of CERTIFY" but does not specify version numbers for PyTorch or any other software libraries, environments, or solvers used for the experiments (a sketch of the generic CERTIFY computation appears after this table). |
| Experiment Setup | Yes | The detailed experimental setups, e.g., training details, datasets, and hyperparameters for the baseline methods, are specified in Appendix C. When SmoothMix is used, we consider a fixed hyperparameter value for α = 1.0 and m = 4... we set T = 2, 4, 8 for the models with σ = 0.25, 0.5, 1.0, respectively. |
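
The Pseudocode and Experiment Setup rows describe the training procedure only at a high level, so below is a minimal, hypothetical PyTorch sketch of what such a SmoothMix-style training step could look like. It is not the authors' Algorithm 1 (see Appendix A of the paper and the released code for the actual procedure): the function name `smoothmix_step`, the mixup-loss weight `eta`, the exact attack form, and the choice of label mixing are all assumptions; only the roles of the noise level σ, the m Monte Carlo noise samples, the T attack steps, and the step size α are taken from the rows above.

```python
import torch
import torch.nn.functional as F


def smoothmix_step(model, x, y, num_classes, sigma, alpha=1.0, T=4, m=4, eta=5.0):
    """One sketched SmoothMix-style training step on an image batch (x, y)."""
    # m Gaussian noise samples per input for the Monte Carlo approximation.
    noise = sigma * torch.randn(m, *x.shape, device=x.device)

    def smoothed_log_prob(inp):
        # Average the softmax outputs over the m noisy copies (soft smoothed classifier).
        probs = torch.stack([F.softmax(model(inp + n), dim=1) for n in noise]).mean(0)
        return probs.clamp_min(1e-12).log()

    # T ascent steps of size alpha on the smoothed cross-entropy (assumed attack form).
    x_adv = x.clone()
    for _ in range(T):
        x_adv = x_adv.detach().requires_grad_(True)
        grad = torch.autograd.grad(F.nll_loss(smoothed_log_prob(x_adv), y), x_adv)[0]
        grad_norm = grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12
        x_adv = x_adv + alpha * grad / grad_norm

    # Mixup between the clean and adversarial inputs; labels are mixed toward the
    # smoothed prediction at x_adv (one plausible choice, not necessarily the paper's).
    lam = torch.rand(x.size(0), device=x.device).view(-1, 1, 1, 1)
    x_mix = (1 - lam) * x + lam * x_adv.detach()
    y_clean = F.one_hot(y, num_classes).float()
    with torch.no_grad():
        y_adv = smoothed_log_prob(x_adv.detach()).exp()
    y_mix = (1 - lam.flatten(1)) * y_clean + lam.flatten(1) * y_adv

    # Standard Gaussian-augmented loss plus the mixup term (weight eta is hypothetical).
    loss_nat = torch.stack([F.cross_entropy(model(x + n), y) for n in noise]).mean()
    loss_mix = -(y_mix * smoothed_log_prob(x_mix)).sum(dim=1).mean()
    return loss_nat + eta * loss_mix
```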
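
For context on the Software Dependencies row, CERTIFY refers to the standard Monte Carlo certification procedure for randomized smoothing from Cohen et al. (2019), which the paper reuses for evaluation. The sketch below shows only that generic radius computation, not the paper's own code; the sample counts `n0` and `n`, the confidence level `alpha`, and the batch size are illustrative defaults.

```python
import torch
from scipy.stats import norm
from statsmodels.stats.proportion import proportion_confint


def certify(model, x, num_classes, sigma, n0=100, n=100000, alpha=0.001, batch=1000):
    """Return (predicted class, certified L2 radius), or (-1, 0.0) to abstain."""
    def sample_counts(num):
        # Count hard predictions of the base classifier over `num` noisy copies of x.
        counts = torch.zeros(num_classes, dtype=torch.long)
        with torch.no_grad():
            while num > 0:
                b = min(batch, num)
                noisy = x.unsqueeze(0) + sigma * torch.randn(b, *x.shape, device=x.device)
                counts += torch.bincount(model(noisy).argmax(dim=1).cpu(), minlength=num_classes)
                num -= b
        return counts

    c_hat = sample_counts(n0).argmax().item()   # guess the top class from n0 samples
    n_c = sample_counts(n)[c_hat].item()        # count hits for that class over n samples
    # One-sided (1 - alpha) lower confidence bound on P[f(x + noise) = c_hat].
    p_lower = proportion_confint(n_c, n, alpha=2 * alpha, method="beta")[0]
    if p_lower <= 0.5:
        return -1, 0.0                          # abstain
    return c_hat, sigma * norm.ppf(p_lower)     # certified L2 radius
```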