Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity

Authors: Andrew Cullen, Paul Montague, Shijie Liu, Sarah Erfani, Benjamin Rubinstein

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To evaluate the performance of our proposed certification improvements, we considered the certified radius produced for MNIST [19], CIFAR-10 [18], and Tiny-Imagenet [16], the latter of these is a 200-class variant of Imagenet [43] which downsamples images to 3 × 60 × 60.
Researcher Affiliation | Collaboration | Andrew C. Cullen¹, Paul Montague², Shijie Liu¹, Sarah M. Erfani¹, Benjamin I.P. Rubinstein¹ (¹School of Computing and Information Systems, University of Melbourne, Parkville, Australia; ²Defence Science and Technology Group, Adelaide, Australia)
Pseudocode | Yes | Algorithm 1: Single Bubble Loop. (An illustrative smoothing-certificate sketch follows the table below.)
Open Source Code | Yes | The full code to implement our experiments can be found at https://github.com/andrew-cullen/DoubleBubble.
Open Datasets | Yes | To evaluate the performance of our proposed certification improvements, we considered the certified radius produced for MNIST [19], CIFAR-10 [18], and Tiny-Imagenet [16]
Dataset Splits | No | The paper states training details like 'Training employed Cross Entropy loss with a batch size of 128 over 50 epochs' and 'Training occurred using SGD over 80 epochs', but it does not specify explicit training/validation/test dataset splits (e.g., 80/10/10 percentages or sample counts for each split).
Hardware Specification | Yes | For both MNIST and CIFAR-10, our experimentation utilised a single NVIDIA P100 GPU core with 12 GB of GPU RAM... Tiny-Imagenet training and evaluation utilised 3 P100 GPUs
Software Dependencies | No | The paper states that 'All datasets were modelled using the Resnet18 architecture in PyTorch [32]', but it does not specify the version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | Training employed Cross Entropy loss with a batch size of 128 over 50 epochs... Parameter optimisation was performed with Adam [17], with the learning rate set as 0.001. ...Training occurred using SGD over 80 epochs, with a starting learning rate of 0.1, decreasing by a factor of 10 after 30 and 60 epochs, and momentum set to 0.9. (A hedged training-setup sketch based on these quoted details follows the table below.)
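The Pseudocode row quotes Algorithm 1, the Single Bubble Loop, which improves a randomized-smoothing certificate by exploiting transitivity between overlapping certified regions. The sketch below is not the authors' algorithm: it shows a Cohen-et-al.-style Monte Carlo certificate and an illustrative transitivity step along a single direction, assuming an L2 Gaussian smoothing setup; `certify`, `single_bubble_along`, and all parameter values are hypothetical names chosen here. The released repository linked in the Open Source Code row contains the actual implementation.

```python
"""Illustrative sketch only: a Cohen-style randomized-smoothing certificate plus a
single-direction transitivity step. NOT the paper's Algorithm 1; see
https://github.com/andrew-cullen/DoubleBubble for the real implementation."""
import torch
from scipy.stats import binomtest, norm


def _sample_counts(model, x, sigma, n, batch=1_000):
    """Count base-model predictions under Gaussian noise around x."""
    with torch.no_grad():
        num_classes = model(x.unsqueeze(0)).shape[1]
        counts = torch.zeros(num_classes, dtype=torch.long)
        remaining = n
        while remaining > 0:
            b = min(batch, remaining)
            noisy = x.unsqueeze(0) + sigma * torch.randn(b, *x.shape)
            counts += torch.bincount(model(noisy).argmax(1), minlength=num_classes)
            remaining -= b
    return counts


def certify(model, x, sigma, n0=100, n=10_000, alpha=0.001):
    """Cohen-style certificate: (predicted class, certified L2 radius), or (-1, 0.0) to abstain."""
    top = int(_sample_counts(model, x, sigma, n0).argmax())
    counts = _sample_counts(model, x, sigma, n)
    # One-sided Clopper-Pearson lower bound on P(base model predicts `top` under noise).
    p_lower = binomtest(int(counts[top]), n).proportion_ci(
        confidence_level=1 - 2 * alpha, method="exact"
    ).low
    if p_lower <= 0.5:
        return -1, 0.0
    return top, sigma * norm.ppf(p_lower)


def single_bubble_along(model, x, direction, sigma, **kw):
    """Transitivity along one unit direction: certify x, step to the edge of that
    certificate, certify again, and add the radii if the predictions agree."""
    c0, r0 = certify(model, x, sigma, **kw)
    if c0 == -1:
        return c0, 0.0
    x1 = x + r0 * direction / direction.norm()
    c1, r1 = certify(model, x1, sigma, **kw)
    # Any point on the ray within distance r0 + r1 of x lies inside one of the two
    # certified balls, so the directional certificate extends to r0 + r1.
    return c0, (r0 + r1 if c1 == c0 else r0)
```

Note that this sketch only extends the certificate along one ray; the paper's contribution is a stronger, geometrically complete treatment of how overlapping certified regions compose.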
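The Experiment Setup row quotes two training recipes (Adam at learning rate 0.001 for 50 epochs with batch size 128; SGD for 80 epochs at learning rate 0.1, momentum 0.9, decayed by 10 after epochs 30 and 60). The quote does not tie each recipe to a dataset, so no assignment is made in the sketch below; it simply assumes a torchvision ResNet18 (the architecture named in the Software Dependencies row) and a standard epoch loop. Names such as `train` and the `num_classes` value are placeholders.

```python
"""Minimal sketch of the two quoted training configurations, assuming a torchvision
ResNet18 and standard (image, label) dataloaders with batch size 128."""
import torch
from torch import nn, optim
from torchvision.models import resnet18

model = resnet18(num_classes=10)       # class count depends on the dataset
criterion = nn.CrossEntropyLoss()      # "Cross Entropy loss"

# Recipe 1 (quoted): Adam, learning rate 0.001, 50 epochs, batch size 128.
adam = optim.Adam(model.parameters(), lr=1e-3)

# Recipe 2 (quoted): SGD, 80 epochs, lr 0.1, momentum 0.9,
# divided by 10 after epochs 30 and 60.
sgd = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = optim.lr_scheduler.MultiStepLR(sgd, milestones=[30, 60], gamma=0.1)


def train(model, loader, optimizer, epochs, scheduler=None, device="cpu"):
    """Generic epoch loop matching the quoted setup; `loader` is assumed to
    yield (image, label) batches of size 128."""
    model.to(device).train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x.to(device)), y.to(device))
            loss.backward()
            optimizer.step()
        if scheduler is not None:
            scheduler.step()
```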