Generalized Depthwise-Separable Convolutions for Adversarially Robust and Efficient Neural Networks

Authors: Hassan Dbouk, Naresh Shanbhag

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the effectiveness of GDWS via extensive experiments on CIFAR-10, SVHN, and ImageNet datasets.
Researcher Affiliation | Academia | Hassan Dbouk & Naresh R. Shanbhag, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, {hdbouk2,shanbhag}@illinois.edu
Pseudocode | Yes | Algorithm 1 (MEGO): Minimum Error Complexity-constrained GDWS Optimal Approximation; Algorithm 2 (LEGO): Least Complex Error-constrained GDWS Optimal Approximation; Algorithm 3: Constructing GDWS networks
Open Source Code | Yes | Our code can be found at https://github.com/hsndbk4/GDWS.
Open Datasets | Yes | We demonstrate the effectiveness of GDWS via extensive experiments on CIFAR-10, SVHN, and ImageNet datasets.
Dataset Splits | No | The paper mentions training and testing, and refers to an appendix for "Details on the training/evaluation setup", but does not explicitly provide training/validation/test dataset splits in the main text.
Hardware Specification | Yes | We measure the throughput in FPS by mapping the networks onto an NVIDIA Jetson Xavier via native PyTorch [24] commands... (a) robust accuracy against ℓ∞-bounded perturbations vs. frames-per-second measured on an NVIDIA Jetson Xavier, and (b) total time required to implement these methods measured on a single NVIDIA 1080 Ti GPU. (a throughput-measurement sketch follows the table)
Software Dependencies | No | The paper mentions using "PyTorch [24]" but does not specify a version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | We report Arob against ℓ∞-bounded perturbations generated via PGD [21] with standard attack strengths: ϵ = 8/255 with PGD-100 for both CIFAR-10 [17] and SVHN [23] datasets, and ϵ = 4/255 with PGD-50 for the ImageNet [29] dataset. (a PGD evaluation sketch follows the table)
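
The Experiment Setup row quotes the paper's attack configuration: ℓ∞-bounded PGD with ϵ = 8/255 and 100 steps for CIFAR-10/SVHN. Below is a minimal PyTorch sketch of what such a robustness evaluation typically looks like. It is not the authors' code: the step size alpha, the random start, the assumption that inputs lie in [0, 1], and the helper names pgd_linf / robust_accuracy are illustrative choices.

import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=100):
    """l_inf PGD with random start; eps/steps follow the quoted CIFAR-10/SVHN
    setting. alpha and the [0, 1] input range are assumptions."""
    model.eval()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()          # gradient ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project into eps-ball
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv

def robust_accuracy(model, loader, device="cuda", **pgd_kwargs):
    """Fraction of test samples still classified correctly under the PGD attack."""
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_linf(model, x, y, **pgd_kwargs)
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total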
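Similarly, the Hardware Specification row notes that throughput (FPS) is measured by mapping the networks onto an NVIDIA Jetson Xavier via native PyTorch commands. A rough sketch of such a measurement is shown below; the batch size, warm-up count, and iteration count are assumptions, and measure_fps is a hypothetical helper rather than the authors' script.

import time
import torch

@torch.no_grad()
def measure_fps(model, input_shape=(1, 3, 32, 32), device="cuda",
                warmup=10, iters=100):
    """Average frames-per-second of a forward pass using plain PyTorch calls."""
    model = model.to(device).eval()
    x = torch.randn(*input_shape, device=device)
    for _ in range(warmup):          # warm-up iterations (not timed)
        model(x)
    if device.startswith("cuda"):
        torch.cuda.synchronize()     # make sure queued kernels finish
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    if device.startswith("cuda"):
        torch.cuda.synchronize()
    return iters * input_shape[0] / (time.perf_counter() - start)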