Neural Architecture Dilation for Adversarial Robustness

Authors: Yanxi Li, Zhaohui Yang, Yunhe Wang, Chang Xu

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on real-world datasets and benchmark neural networks demonstrate the effectiveness of the proposed algorithm to balance the accuracy and adversarial robustness.
Researcher Affiliation | Collaboration | 1. School of Computer Science, University of Sydney, Australia; 2. Huawei Noah's Ark Lab; 3. Key Lab of Machine Perception (MOE), Department of Machine Intelligence, Peking University, China
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | We perform dilation under white-box attacks on CIFAR-10/100 [10] and ImageNet [21] and under black-box attacks on CIFAR-10. The NADAR framework requires a backbone to be dilated. Following previous works [18, 22, 33, 32], we use the 10-times-wider variant of ResNet, i.e. the Wide ResNet 34-10 (WRN34-10) [31], on both CIFAR datasets, and use ResNet-50 [9] on ImageNet. (A backbone-loading sketch follows the table.)
Dataset Splits | Yes | During the dilation phase, the training set is split into two equal parts. One is used as the training set for network weight optimization, and the other is used as the validation set for architecture parameter optimization. (See the split sketch below the table.)
Hardware Specification | Yes | We also report the GPU-days cost to train the networks with an NVIDIA V100 GPU.
Software Dependencies | No | The paper does not specify the versions of any software dependencies (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | For PGD and FreeAT, we set the number of steps K = 4 and the step size ε_S = 2 (see the PGD sketch below the table).
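
Since no code is released (see the Open Source Code row), the following is a minimal sketch of instantiating the ImageNet backbone named in the Open Datasets row, assuming PyTorch and torchvision; torchvision ships no WRN34-10, so the CIFAR backbone would need a separate implementation, which is not shown here.

```python
import torch
from torchvision.models import resnet50

# Hypothetical instantiation of the ImageNet backbone named in the table.
# The authors' WRN34-10 for CIFAR is not available in torchvision and is
# assumed to come from a separate implementation.
model = resnet50(num_classes=1000)

x = torch.randn(1, 3, 224, 224)  # one ImageNet-sized input
print(model(x).shape)            # torch.Size([1, 1000])
```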
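
The 50/50 dilation-phase split in the Dataset Splits row resembles a DARTS-style bilevel setup (one half for weights, one half for architecture parameters). Below is a minimal sketch, assuming PyTorch and CIFAR-10; the transforms, batch size, and seed are illustrative choices, not the authors' configuration.

```python
import torch
from torch.utils.data import random_split, DataLoader
from torchvision import datasets, transforms

# Assumed augmentation pipeline; the paper does not specify one here.
transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transform)

# Split the training set into two equal halves: one to optimize network
# weights, the other as a validation set for the architecture parameters.
half = len(train_set) // 2
weight_set, arch_set = random_split(
    train_set, [half, len(train_set) - half],
    generator=torch.Generator().manual_seed(0),  # assumed seed
)

weight_loader = DataLoader(weight_set, batch_size=128, shuffle=True)
arch_loader = DataLoader(arch_set, batch_size=128, shuffle=True)
```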
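
The Experiment Setup row quotes a truncated PGD configuration: K = 4 steps, with the step size value cut off. A minimal L∞ PGD sketch with steps=4 follows; `eps` and `alpha` are left as parameters because the quoted values are incomplete, and the function name is my own.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps=4):
    """Standard L-infinity PGD; steps=4 matches the K = 4 reported above.

    `eps` (perturbation budget) and `alpha` (step size) are parameters
    because the quoted setup is truncated in the table.
    """
    # Random start inside the eps-ball, clipped to the valid image range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0).detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball and [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

    return x_adv.detach()
```

For instance, `pgd_attack(model, x, y, eps=8/255, alpha=2/255)` would reproduce a budget common in the CIFAR robustness literature; those two values are an assumption, not taken from the truncated row.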