DeformRS: Certifying Input Deformations with Randomized Smoothing

Authors: Motasem Alfarra, Adel Bibi, Naeemullah Khan, Philip H.S. Torr, Bernard Ghanem

AAAI 2022, pp. 6001-6009

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on MNIST, CIFAR10, and ImageNet show competitive performance of DeformRS-Par, achieving a certified accuracy of 39% against perturbed rotations in the set [−10°, 10°] on ImageNet.
Researcher Affiliation | Academia | (1) King Abdullah University of Science and Technology (KAUST), (2) University of Oxford
Pseudocode | No | The paper contains mathematical definitions and theorems but no explicit pseudocode or algorithm blocks (a hedged sketch of the underlying smoothing procedure is given after this table).
Open Source Code | Yes | Official Code: https://github.com/MotasemAlfarra/DeformRS.
Open Datasets | Yes | Setup. We follow standard practices in prior art, e.g. Li and FBV, and conduct experiments on the MNIST (LeCun 1998), CIFAR10 (Krizhevsky 2012), and ImageNet (Russakovsky et al. 2015) datasets (see the dataset-loading sketch after this table).
Dataset Splits | No | The paper mentions a 'test set' for evaluation and results 'cross-validated over λ' but does not provide specific details on training, validation, or test dataset splits (e.g., percentages or sample counts).
Hardware Specification | Yes | In all of our training experiments, we used a single NVIDIA 1080-TI for the CIFAR10 and MNIST experiments, while we used 2 NVIDIA V100s to fine-tune ImageNet models. For the certification experiments, we use a single GPU per experiment (NVIDIA 1080-TI for CIFAR10 and MNIST, and NVIDIA V100 for ImageNet).
Software Dependencies | No | The paper mentions using 'publicly available code (Cohen, Rosenfeld, and Kolter 2019)' but does not list specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks).
Experiment Setup | Yes | For experiments on MNIST and CIFAR10, we certify a ResNet18 (He et al. 2016) trained for 90 epochs with a learning rate of 0.1, momentum of 0.9, weight decay of 10⁻⁴, and learning-rate decay at epochs 30 and 60 by a factor of 0.1. For ImageNet experiments, we certify a fine-tuned pretrained ResNet50 for 30 epochs using SGD with a learning rate of 10⁻³ that decays every 10 epochs by a factor of 0.1 (see the training-schedule sketch after this table).
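
Although the paper provides no pseudocode, the construction it certifies is compact enough to sketch. Below is a minimal, hypothetical PyTorch sketch of parameter-space randomized smoothing for rotations, in the spirit of DeformRS-Par and built on the Monte Carlo estimator of Cohen, Rosenfeld, and Kolter (2019). The sample count, σ, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: randomized smoothing over a rotation angle,
# in the spirit of DeformRS-Par. Not the authors' implementation.
import torch
import torchvision.transforms.functional as TF
from scipy.stats import norm

def smoothed_rotation_predict(f, x, sigma=0.5, n=1000, batch_size=100):
    """Monte Carlo estimate of the deformation-smoothed classifier
    g(x) = argmax_c P_{eps ~ N(0, sigma^2)}[f(rotate(x, eps)) = c],
    plus a certified rotation radius in degrees."""
    counts = None
    remaining = n
    with torch.no_grad():
        while remaining > 0:
            m = min(batch_size, remaining)
            remaining -= m
            # Sample rotation angles (degrees) around the identity deformation.
            angles = torch.randn(m) * sigma
            batch = torch.stack([TF.rotate(x, angle.item()) for angle in angles])
            logits = f(batch)
            preds = logits.argmax(dim=1)
            hist = torch.bincount(preds, minlength=logits.shape[1])
            counts = hist if counts is None else counts + hist
    top_class = counts.argmax().item()
    # Crude stand-in for the Clopper-Pearson lower confidence bound
    # used in practice for the top-class probability.
    p_a = min(counts[top_class].item() / n, 1.0 - 1e-6)
    # Certified radius on the rotation angle, analogous to Cohen et al.
    radius = sigma * norm.ppf(p_a) if p_a > 0.5 else 0.0
    return top_class, radius
```

The same estimator extends to the other parametric deformations the paper certifies (e.g., translation and scaling) by smoothing over those deformation parameters instead of the rotation angle.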
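
All three datasets quoted in the Open Datasets row are publicly downloadable, and two of them ship with torchvision. A minimal loading sketch follows; the paths, transforms, and download flags are assumptions, not taken from the paper.

```python
# Minimal dataset setup via torchvision; paths and transforms are assumptions.
import torchvision.transforms as T
from torchvision.datasets import MNIST, CIFAR10

to_tensor = T.ToTensor()
mnist_train = MNIST(root="./data", train=True, download=True, transform=to_tensor)
cifar_train = CIFAR10(root="./data", train=True, download=True, transform=to_tensor)
# ImageNet (Russakovsky et al. 2015) must be downloaded manually, after which
# torchvision.datasets.ImageNet can be pointed at the extracted archives.
```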
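
The hyperparameters in the Experiment Setup row map onto a standard SGD schedule. A plausible PyTorch rendering is sketched below; the optimizer and scheduler values come from the quoted text, while the model construction and the training-loop body are assumptions.

```python
# Plausible rendering of the quoted MNIST/CIFAR10 training schedule;
# everything outside the quoted hyperparameters is an assumption.
import torch
from torchvision.models import resnet18

model = resnet18(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
# Decay the learning rate by a factor of 0.1 at epochs 30 and 60.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[30, 60], gamma=0.1)

for epoch in range(90):
    # ... one epoch over the deformation-augmented training set ...
    scheduler.step()

# For ImageNet, the quoted setup corresponds to fine-tuning a pretrained
# ResNet50 for 30 epochs with lr=1e-3 and
# torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1).
```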