Towards Interpreting and Utilizing Symmetry Property in Adversarial Examples

Authors: Shibin Mei, Chenglong Zhao, Bingbing Ni, Shengchao Yuan

AAAI 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | In this section, we conduct extensive experiments to evaluate the performance of the proposed adversarial defense method, which well demonstrates the favorable performance of our method compared with recent prior arts. |
| Researcher Affiliation | Academia | Shanghai Jiao Tong University, Shanghai 200240, China {adair327, cl-zhao, nibingbing, sc yuan}@sjtu.edu.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, such as a specific repository link, an explicit code release statement, or code in supplementary materials. |
| Open Datasets | Yes | We conduct experiments on several benchmark datasets including CIFAR-10 and SVHN. We also conduct experiments on larger datasets such as CIFAR100 and Tiny Image Net. All these networks are trained for 200 epochs with batch size 128, and optimized by stochastic gradient descent (SGD), with Nesterov momentum 0.9 and weight decay 5 × 10⁻⁴. |
| Dataset Splits | No | The paper mentions training and testing on benchmark datasets like CIFAR-10, SVHN, CIFAR100, and Tiny Image Net, but does not explicitly provide specific details about a validation dataset split (e.g., percentages, sample counts, or how it was used). |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers, such as Python version or deep learning library versions like PyTorch or TensorFlow. |
| Experiment Setup | Yes | All these networks are trained for 200 epochs with batch size 128, and optimized by stochastic gradient descent (SGD), with Nesterov momentum 0.9 and weight decay 5 × 10⁻⁴. The learning rate is set as 0.1 and divided by 10 at 100 and 150 epochs, respectively. We empirically set the parameter ζ = 0.1 and α = 2 in our experiments. |
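For readers who want to reproduce the quoted experiment setup, the sketch below maps it onto a standard PyTorch/torchvision training configuration. This is a minimal illustration under stated assumptions, not the authors' released code: the ResNet-18 backbone, the CIFAR-10 loader, the data augmentation, and the `build_training_setup` helper are placeholders chosen for illustration, and the method-specific loss terms weighted by ζ and α are not implemented here.

```python
# Minimal sketch of the quoted optimization setup (assumptions: PyTorch + torchvision;
# a torchvision ResNet-18 stands in for the paper's network; the symmetry-based defense
# loss weighted by zeta and alpha is NOT implemented in this sketch).
import torch
import torchvision
from torchvision import transforms

ZETA = 0.1   # ζ = 0.1, as reported in the quoted setup
ALPHA = 2    # α = 2, as reported in the quoted setup

def build_training_setup():
    # Placeholder architecture and dataset; the paper evaluates on CIFAR-10, SVHN,
    # CIFAR-100, and Tiny ImageNet, so swap these in as needed.
    model = torchvision.models.resnet18(num_classes=10)

    # SGD with Nesterov momentum 0.9 and weight decay 5 × 10⁻⁴, initial lr 0.1.
    optimizer = torch.optim.SGD(
        model.parameters(),
        lr=0.1,
        momentum=0.9,
        nesterov=True,
        weight_decay=5e-4,
    )

    # Learning rate divided by 10 at epochs 100 and 150 (200 epochs total).
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[100, 150], gamma=0.1
    )

    # Common CIFAR augmentation (crop + flip) is assumed here; the quote does not specify it.
    train_set = torchvision.datasets.CIFAR10(
        root="./data",
        train=True,
        download=True,
        transform=transforms.Compose([
            transforms.RandomCrop(32, padding=4),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
        ]),
    )
    loader = torch.utils.data.DataLoader(
        train_set, batch_size=128, shuffle=True, num_workers=4
    )
    return model, optimizer, scheduler, loader
```

The `MultiStepLR` milestones at epochs 100 and 150 with `gamma=0.1` implement the "divided by 10" schedule from the quote; ζ = 0.1 and α = 2 would enter wherever the paper's defense-specific loss is computed, which the sketch deliberately leaves out.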