Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Blurred-Dilated Method for Adversarial Attacks

Authors: Yang Deng, Weibin Wu, Jianping Zhang, Zibin Zheng

NeurIPS 2023 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on the ImageNet dataset show that adversarial examples generated by BD achieve significantly higher transferability than the state-of-the-art baselines.
Researcher Affiliation | Academia | Yang Deng, School of Software Engineering, Sun Yat-sen University (EMAIL); Weibin Wu, School of Software Engineering, Sun Yat-sen University (EMAIL); Jianping Zhang, Department of Computer Science and Engineering, The Chinese University of Hong Kong (EMAIL); Zibin Zheng, School of Software Engineering, Sun Yat-sen University (EMAIL)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper provides links to 'open-source pretrained models' from torchvision and GitHub repositories (footnotes 1, 2, 3), which are third-party resources used by the authors, but it does not include an explicit statement or link for the source code of the Blurred-Dilated (BD) method itself.
Open Datasets | Yes | Consistent with the previous works [18, 21], we use the ImageNet-compatible dataset in the NIPS 2017 adversarial competition [32] as the test set to generate adversarial samples. After modifying source models with our BD, we fine-tune the modified source models with the ImageNet training set [25] to recover their classification accuracy. ... Following previous efforts [15, 13], we choose the CIFAR-10 and CIFAR-100 datasets [24], which consist of 60000 images from 10 and 100 classes, respectively.
Dataset Splits | No | The paper mentions a training set and a test set for the CIFAR-10/100 datasets ('officially divided into a training set of 50000 images and a test set of 10000 images') and uses the NIPS 2017 adversarial competition dataset as the test set for ImageNet, but it does not explicitly specify a validation split or its size.
Hardware Specification | Yes | All experiments were performed with an NVIDIA V100 GPU.
Software Dependencies | No | The paper mentions using models collected from 'torchvision' (footnote 1) but does not provide specific version numbers for any software, libraries, or frameworks used in the experiments.
Experiment Setup | Yes | The step size α is set to 2/255 when ϵ = 16/255, and 1/255 when ϵ = 8/255 or 4/255. For MI-FGSM, we set the decay factor µ = 1.0. The iteration number T = 10.
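The hyperparameters in the Experiment Setup row (step size α, decay factor µ, iteration count T) parameterize the standard MI-FGSM attack loop that the paper's baselines use. The BD method's own code is not released, but the momentum-iterative loop these settings plug into can be sketched in NumPy. This is a minimal illustration, not the authors' implementation; `grad_fn` and its interface are assumptions for the sake of a self-contained example.

```python
import numpy as np

def mi_fgsm(x, grad_fn, eps=16/255, alpha=2/255, mu=1.0, T=10):
    """Sketch of the MI-FGSM attack loop with the paper's reported
    settings: alpha = 2/255 for eps = 16/255, decay factor mu = 1.0,
    and T = 10 iterations.

    `grad_fn(x_adv)` is a hypothetical callable returning the loss
    gradient w.r.t. the input; in practice it would wrap a model's
    backward pass.
    """
    x_adv = x.copy()
    g = np.zeros_like(x)
    for _ in range(T):
        grad = grad_fn(x_adv)
        # Accumulate momentum over L1-normalized gradients.
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)
        # Take a signed step of size alpha.
        x_adv = x_adv + alpha * np.sign(g)
        # Project back into the eps-ball and the valid pixel range.
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

With alpha = 2/255 and T = 10, the raw step budget (20/255) exceeds ϵ = 16/255, so the per-iteration projection is what keeps the final perturbation inside the ϵ-ball.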