Accelerating Certified Robustness Training via Knowledge Transfer

Authors: Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on CIFAR-10 show that CRT speeds up certified robustness training by 8x on average across three different architecture generations while achieving comparable robustness to state-of-the-art methods. We also show that CRT can scale to large-scale datasets like ImageNet.
Researcher Affiliation | Collaboration | Pratik Vaishnavi (Stony Brook University, pvaishnavi@cs.stonybrook.edu); Kevin Eykholt (IBM Research, kheykholt@ibm.com); Amir Rahmati (Stony Brook University, amir@cs.stonybrook.edu)
Pseudocode | Yes | Algorithm 1: Certified Robustness Transfer (CRT)
Open Source Code | Yes | 3. (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See Appendix ??. 4. (c) Did you include any new assets either in the supplemental material or as a URL? [Yes] See Appendix ??.
Open Datasets | Yes | Our main results are generated using the CIFAR-10 dataset [18], but we also demonstrate the effectiveness of CRT on ImageNet [5] (Section 5.3). Both these datasets are open-source and free for non-commercial use.
Dataset Splits | No | On CIFAR-10, we compute these metrics using the entire test set. The paper does not specify how the training and validation splits were created or their sizes.
Hardware Specification | Yes | All classifiers were trained on the same machine with a single Nvidia Titan V GPU.
Software Dependencies | No | The paper mentions training with 'Stochastic Gradient Descent' and using 'pytorch-cifar' (implying PyTorch), but does not specify version numbers for any software dependencies.
Experiment Setup | Yes | All CRT classifiers were trained using Stochastic Gradient Descent till convergence (200 epochs), with a batch size of 128. Further hyperparameter details are available in Appendix ??. (See the illustrative training sketch after this table.)
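
The setup reported in the Pseudocode and Experiment Setup rows can be pictured with a short PyTorch sketch. This is a minimal, hypothetical illustration, not the authors' released code: the L2 logit-matching transfer loss, the ResNet-18 placeholder networks, and the learning-rate settings are assumptions for illustration only; the actual CRT objective and hyperparameters are defined in the paper's Algorithm 1 and appendix. Only the optimizer (SGD), epoch count (200), batch size (128), dataset (CIFAR-10), and single-GPU setting are taken from the table above.

```python
# Hypothetical CRT-style transfer run (illustrative only, not the authors' code).
# Assumes an L2 logit-matching loss as a stand-in for the objective in the
# paper's Algorithm 1; SGD, batch size 128, and 200 epochs follow the table.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

device = "cuda" if torch.cuda.is_available() else "cpu"

# CIFAR-10 training set with standard augmentation.
train_set = datasets.CIFAR10(
    root="./data", train=True, download=True,
    transform=transforms.Compose([
        transforms.RandomCrop(32, padding=4),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ]),
)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=2)

# Placeholders: `teacher` stands in for a pre-trained certifiably robust
# classifier, `student` for the new architecture being trained.
teacher = models.resnet18(num_classes=10).to(device)  # assume robust weights are loaded here
teacher.eval()
student = models.resnet18(num_classes=10).to(device)

# Learning-rate settings are illustrative; the paper defers them to its appendix.
optimizer = torch.optim.SGD(student.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

for epoch in range(200):  # trained "till convergence (200 epochs)"
    student.train()
    for x, _ in train_loader:  # labels unused by this illustrative transfer loss
        x = x.to(device)
        with torch.no_grad():
            teacher_out = teacher(x)  # robust teacher's outputs, no gradient
        student_out = student(x)
        # Assumed transfer objective: pull student outputs toward the teacher's.
        loss = F.mse_loss(student_out, teacher_out)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```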