Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds

Authors: Yujia Huang, Huan Zhang, Yuanyuan Shi, J. Zico Kolter, Anima Anandkumar

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimentally, we show that our method consistently outperforms state-of-the-art methods in both clean and certified accuracy on MNIST, CIFAR-10 and Tiny-ImageNet datasets with various network architectures.
Researcher Affiliation | Collaboration | Yujia Huang (1), Huan Zhang (2), Yuanyuan Shi (3), J. Zico Kolter (2,4), Anima Anandkumar (1,5); (1) California Institute of Technology, (2) Carnegie Mellon University, (3) UC San Diego, (4) Bosch Center for AI, (5) NVIDIA
Pseudocode | Yes | Algorithm 1: Local Lipschitz Based Certifiably Robust Training
Open Source Code | Yes | Our code is available at https://github.com/yjhuangcd/local-lipschitz.
Open Datasets | Yes | We train with our method to certify robustness within an ℓ2 ball of radius 1.58 on MNIST [31] and 36/255 on CIFAR-10 [32] and Tiny-ImageNet (https://tiny-imagenet.herokuapp.com) on various network architectures.
Dataset Splits | No | The paper states 'For more details and hyper-parameters in training, please refer to Appendix C.' However, it does not explicitly provide validation dataset splits, such as percentages, sample counts, or the methodology for creating a validation set from the mentioned datasets in the main text.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU specifications, or memory.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or specific library versions).
Experiment Setup | Yes | For more details and hyper-parameters in training, please refer to Appendix C.
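For context on the certificates the table refers to (an ℓ2 robustness radius derived from a Lipschitz bound on the network), the general recipe can be sketched in a few lines. This is a minimal illustrative sketch only, using the standard *global* bound (product of per-layer spectral norms) and the margin/(√2·L) certificate; the paper's contribution is a tighter *local* Lipschitz bound, which this sketch does not implement, and all function names here are hypothetical.

```python
import numpy as np

def spectral_norm(W, n_iter=100, seed=0):
    """Largest singular value of W, estimated by power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(W.shape[1])
    for _ in range(n_iter):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ (W @ v))

def global_lipschitz_bound(weights):
    """The product of per-layer spectral norms upper-bounds the l2
    Lipschitz constant of a ReLU network (ReLU is 1-Lipschitz)."""
    bound = 1.0
    for W in weights:
        bound *= spectral_norm(W)
    return bound

def certified_radius(logits, lip_bound):
    """Margin-based l2 certificate: no perturbation of norm below this
    radius can flip the top-1 prediction, given a Lipschitz bound on
    the logit map (the margin / (sqrt(2) * L) rule)."""
    top2 = np.sort(logits)[-2:]
    margin = top2[1] - top2[0]
    return margin / (np.sqrt(2) * lip_bound)
```

For example, a margin of 1.5 with a Lipschitz bound of 1 yields a certified ℓ2 radius of 1.5/√2 ≈ 1.06; the radii quoted above (1.58 on MNIST, 36/255 on CIFAR-10) are the targets for certificates of this kind.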