Certified Robustness via Dynamic Margin Maximization and Improved Lipschitz Regularization

Authors: Mahyar Fazlyab, Taha Entesari, Aniket Roy, Rama Chellappa

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on the MNIST, CIFAR-10, and Tiny-ImageNet data sets verify that our proposed algorithm obtains competitively improved results compared to the state-of-the-art.
Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, Johns Hopkins University. {mahyarfazlyab, tentesa1, aroy28, rchella4}@jhu.edu
Pseudocode | No | No structured pseudocode or algorithm blocks are explicitly presented in the paper.
Open Source Code | Yes | Code is available at https://github.com/o4lc/CRM-LipLT.
Open Datasets | Yes | Experiments on the MNIST [46], CIFAR-10 [47], and Tiny-ImageNet [48] data sets verify that our proposed algorithm obtains competitively improved results compared to the state-of-the-art.
Dataset Splits | No | The paper mentions training on MNIST, CIFAR-10, and Tiny-ImageNet and evaluating on a 'test dataset', but it does not explicitly state train/validation/test split percentages, counts, or a splitting methodology beyond general references to test data (see the dataset-loading sketch after the table).
Hardware Specification | No | The paper mentions a parallelized implementation on GPUs but does not specify hardware details such as exact GPU/CPU models, processor types, or memory amounts used for the experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers.
Experiment Setup | No | The details of the architectures, training process, and most hyperparameters are deferred to the supplementary material, so the specific experimental setup is not given in the main text.
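
The Open Datasets and Dataset Splits rows report that the paper trains on MNIST, CIFAR-10, and Tiny-ImageNet but does not state explicit split sizes. As a minimal sketch under that caveat, the snippet below loads the canonical train/test partitions that ship with MNIST (60,000/10,000) and CIFAR-10 (50,000/10,000) via torchvision; these are dataset defaults, not splits confirmed by the paper, and Tiny-ImageNet is omitted because torchvision does not bundle it.

# Minimal sketch (assumes torchvision is installed). Loads the canonical
# train/test partitions that ship with MNIST and CIFAR-10; the paper does
# not confirm which splits or validation protocol the authors used.
import torchvision
import torchvision.transforms as T

to_tensor = T.ToTensor()

mnist_train = torchvision.datasets.MNIST("data", train=True, download=True, transform=to_tensor)
mnist_test = torchvision.datasets.MNIST("data", train=False, download=True, transform=to_tensor)
cifar_train = torchvision.datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
cifar_test = torchvision.datasets.CIFAR10("data", train=False, download=True, transform=to_tensor)

print(len(mnist_train), len(mnist_test))  # 60000 10000
print(len(cifar_train), len(cifar_test))  # 50000 10000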