Asymmetric Certified Robustness via Feature-Convex Neural Networks

Authors: Samuel Pfrommer, Brendon Anderson, Julien Piet, Somayeh Sojoudi

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on Malimg malware classification and subsets of the MNIST, Fashion-MNIST, and CIFAR-10 datasets show that feature-convex classifiers attain substantial certified ℓ1, ℓ2, and ℓ∞-radii while being far more computationally efficient than competitive baselines.
Researcher Affiliation | Academia | Samuel Pfrommer, Brendon G. Anderson, Julien Piet, Somayeh Sojoudi; University of California, Berkeley; {sam.pfrommer,bganderson,piet,sojoudi}@berkeley.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code for reproducing our results is available on GitHub.
Open Datasets | Yes | Experiments on Malimg malware classification and subsets of the MNIST, Fashion-MNIST, and CIFAR-10 datasets show that feature-convex classifiers attain substantial certified ℓ1, ℓ2, and ℓ∞-radii while being far more computationally efficient than competitive baselines.
Dataset Splits | Yes | The training set is further subdivided by an 80%-20% validation split.
Hardware Specification | Yes | All experiments were conducted on a single Ubuntu 20.04 instance with an Nvidia RTX A6000 GPU.
Software Dependencies | No | The paper mentions 'Ubuntu 20.04' as the operating system and 'MOSEK [6]' (version 9.0) as an optimization toolbox used for a specific theoretical demonstration in Appendix D. However, it does not specify versions for the key deep learning libraries (e.g., PyTorch or TensorFlow) that would be needed to replicate the main experimental methodology.
Experiment Setup | Yes | Jacobian regularization is also used to improve our certified radii [31]. We choose λ = 0.0075 for CIFAR-10, λ = 0.075 for Malimg, and λ = 0.01 for MNIST and Fashion-MNIST. ... All feature-convex networks are trained using SGD with a learning rate of 0.001, momentum 0.9, and exponential learning rate decay with γ = 0.99.
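The quoted split and hyperparameters map onto a standard PyTorch training loop. Below is a minimal sketch, assuming a toy stand-in model, synthetic data, and a single-projection estimate of the Jacobian penalty; none of these are taken from the authors' released code, which contains the actual feature-convex architecture and regularizer.

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, random_split

torch.manual_seed(0)
# Synthetic stand-in data: 1000 flattened 28x28 "images" with 10 classes (assumption, not the paper's data).
full_train = TensorDataset(torch.randn(1000, 784), torch.randint(0, 10, (1000,)))

# 80%-20% train/validation split of the training set, as quoted above.
n_train = int(0.8 * len(full_train))
train_set, val_set = random_split(full_train, [n_train, len(full_train) - n_train])
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

# Toy stand-in classifier; the paper's feature-convex network is defined in the authors' repository.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
criterion = nn.CrossEntropyLoss()

# SGD with learning rate 0.001, momentum 0.9, and exponential learning-rate decay (gamma = 0.99).
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

lam = 0.01  # Jacobian-regularization weight quoted for MNIST / Fashion-MNIST

for epoch in range(2):  # epoch count is illustrative only
    for x, y in train_loader:
        x.requires_grad_(True)
        logits = model(x)
        # Single random projection estimate of the input-output Jacobian's squared
        # Frobenius norm; a common way to apply the penalty, assumed here rather
        # than taken from the paper.
        v = torch.randn_like(logits)
        (jac_v,) = torch.autograd.grad((logits * v).sum(), x, create_graph=True)
        loss = criterion(logits, y) + lam * jac_v.pow(2).sum(dim=1).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()  # exponential decay applied once per epoch
```

The per-dataset λ values quoted in the table (0.0075 for CIFAR-10, 0.075 for Malimg, 0.01 for MNIST and Fashion-MNIST) would simply replace `lam` above; where the decay is stepped (per epoch vs. per iteration) is an assumption of this sketch.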