Understanding and Improving Ensemble Adversarial Defense

Authors: Yian Deng, Tingting Mu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Being tested over various existing ensemble adversarial defense techniques, iGAT is capable of boosting their performance by up to 17%, evaluated using the CIFAR10 and CIFAR100 datasets under both white-box and black-box attacks.
Researcher Affiliation | Academia | Yian Deng, Department of Computer Science, The University of Manchester, Manchester, UK, M13 9PL (yian.deng@manchester.ac.uk); Tingting Mu, Department of Computer Science, The University of Manchester, Manchester, UK, M13 9PL (tingting.mu@manchester.ac.uk)
Pseudocode | No | The paper describes the proposed methods and their steps in narrative text, but it does not include a formally structured 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | The source code and pre-trained models can be found at https://github.com/xqsi/iGAT.
Open Datasets | Yes | The CIFAR-10 and CIFAR-100 datasets are used for evaluation, both containing 50,000 training and 10,000 test images [51].
Dataset Splits | No | The paper mentions '50,000 training and 10,000 test images' for CIFAR-10 and CIFAR-100, but does not explicitly specify a separate validation split size or percentage (see the data-loading sketch after the table).
Hardware Specification | Yes | Each experimental run used one NVIDIA V100 GPU plus 8 CPU cores.
Software Dependencies | No | The paper mentions various attacks (PGD, CW, SH, AA) and models (ResNet-20), but does not specify software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or versions of specific libraries); see the attack-evaluation sketch after the table.
Experiment Setup | Yes | The two hyper-parameters are set as α = 0.25 and β = 0.5 for SoE, while α = 5 and β = 10 for ADP, CLDL and DVERGE, found by grid search. The iGAT training uses a batch size of 512 and multi-step learning rates of {0.01, 0.002} for CIFAR10 and {0.1, 0.02, 0.004} for CIFAR100 (see the training-configuration sketch after the table).
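
The reported 50,000/10,000 train/test sizes match the standard CIFAR releases. Below is a minimal data-loading sketch, assuming PyTorch and torchvision; the 5,000-image validation carve-out and the fixed seed are hypothetical choices, since the paper does not report a validation split.

```python
# Minimal sketch of the dataset setup, assuming PyTorch/torchvision.
# The 50,000/10,000 train/test sizes come from the standard CIFAR releases;
# the 5,000-image validation carve-out is hypothetical (no validation split
# is reported in the paper).
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

transform = transforms.ToTensor()

train_full = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)
# datasets.CIFAR100 exposes the same interface for the CIFAR-100 experiments.

val_size = 5_000  # hypothetical; not specified in the paper
train_set, val_set = random_split(
    train_full,
    [len(train_full) - val_size, val_size],
    generator=torch.Generator().manual_seed(0),
)

print(len(train_set), len(val_set), len(test_set))  # 45000 5000 10000
```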
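
Since the robustness evaluation covers PGD, CW, SH and AA attacks but no library versions are given, the following is a hedged sketch of the AA (AutoAttack) part only, assuming the defended ensemble is wrapped as a single PyTorch module returning logits. The perturbation budget eps = 8/255 is a common CIFAR convention assumed here, not a value confirmed by the excerpt.

```python
# Hedged sketch of a white-box robustness check with AutoAttack (the "AA" entry above),
# assuming the ensemble is exposed as one torch.nn.Module that returns logits.
# eps=8/255 is a common CIFAR convention assumed here, not stated in the excerpt.
import torch
from autoattack import AutoAttack  # pip install git+https://github.com/fra31/auto-attack

def evaluate_autoattack(model, x_test, y_test, eps=8 / 255, batch_size=256):
    model.eval()
    adversary = AutoAttack(model, norm="Linf", eps=eps, version="standard")
    # The standard version runs APGD-CE, APGD-T, FAB-T and Square,
    # and prints the resulting robust accuracy.
    x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=batch_size)
    return x_adv
```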
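
The Experiment Setup row can be collected into a configuration sketch, assuming PyTorch. The α/β values, batch size, and learning-rate stages come from the paper as quoted above; the SGD optimizer (with its momentum and weight decay), the epoch milestones at which the rate drops, and the decay factor of 0.2 that reproduces the reported stage values are assumptions.

```python
# Sketch of the reported training configuration, assuming PyTorch. Alpha/beta values,
# batch size, and learning-rate stages are taken from the Experiment Setup row; the
# SGD optimizer (momentum, weight decay) and the drop milestones are hypothetical
# placeholders not stated in the excerpt.
import torch

IGAT_HPARAMS = {
    "SoE":    {"alpha": 0.25, "beta": 0.5},
    "ADP":    {"alpha": 5.0,  "beta": 10.0},
    "CLDL":   {"alpha": 5.0,  "beta": 10.0},
    "DVERGE": {"alpha": 5.0,  "beta": 10.0},
}

BATCH_SIZE = 512
LR_STAGES = {
    "CIFAR10":  [0.01, 0.002],           # multi-step learning rates
    "CIFAR100": [0.1, 0.02, 0.004],
}

def build_optimizer_and_scheduler(params, dataset="CIFAR10", milestones=(50, 75)):
    """Multi-step schedule: start at the first stage value and decay by 0.2 at each
    milestone, which reproduces the reported stages (0.01 -> 0.002 and
    0.1 -> 0.02 -> 0.004). Milestone epochs are assumptions."""
    stages = LR_STAGES[dataset]
    optimizer = torch.optim.SGD(params, lr=stages[0], momentum=0.9, weight_decay=5e-4)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=list(milestones)[: len(stages) - 1], gamma=0.2
    )
    return optimizer, scheduler
```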