Towards a Unified Game-Theoretic View of Adversarial Perturbations and Robustness

Authors: Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conducted experiments on ResNet-18/34/50 [24], WideResNet50-4 [67], DenseNet-161 [26], and VGG-16 [45]. For each DNN, we obtained both the standardly trained version and the adversarially trained version on the ImageNet dataset [42]. We also conducted experiments on PointNet++ [40] learned on the ModelNet10 dataset [61], which is a 3D dataset.
Researcher Affiliation | Collaboration | Shanghai Jiao Tong University; Key Lab. of Machine Perception, School of Artificial Intelligence, Peking University; Institute for Artificial Intelligence, Peking University; Carnegie Mellon University; Huawei Technologies Inc.
Pseudocode | No | The paper describes methods and concepts but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available online at https://github.com/Jie-Ren/A-Unified-Game-Teoretic-Interpretation-of-Adversarial-Robustness.
Open Datasets | Yes | We obtained both the standardly trained version and the adversarially trained version on the ImageNet dataset [42]. We also conducted experiments on PointNet++ [40] learned on the ModelNet10 dataset [61], which is a 3D dataset.
Dataset Splits | Yes | x ∈ Ω was sampled from the validation set.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for conducting the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | We set ϵ = 32/255 by following the setting in [62]. The step size was set to 2/255. For fair comparisons, we controlled the perturbation generated for each image to have a similar attacking utility of 8. For the cutout method, we set the side length of the masked square regions to 112, which was half of the side length of the input sample (dropping α = 25% of the input pixels), following settings in [15].
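
The architectures listed under Research Type and Open Datasets are standard ImageNet classifiers. Below is a minimal sketch, assuming torchvision, of how the standardly trained versions could be instantiated; the adversarially trained counterparts and PointNet++ on ModelNet10 are not available through torchvision and are omitted, and wide_resnet50_2 is used here only as a stand-in for the WideResNet variant named in the quote.

```python
# Minimal sketch (assumption: torchvision weights, not the authors' checkpoints):
# instantiate standardly trained ImageNet versions of the listed architectures.
import torchvision.models as models

standard_models = {
    "ResNet-18": models.resnet18(pretrained=True),
    "ResNet-34": models.resnet34(pretrained=True),
    "ResNet-50": models.resnet50(pretrained=True),
    "WideResNet-50": models.wide_resnet50_2(pretrained=True),  # stand-in for the WideResNet variant in the paper
    "DenseNet-161": models.densenet161(pretrained=True),
    "VGG-16": models.vgg16(pretrained=True),
}
```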
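The Experiment Setup row quotes an L∞ perturbation budget of ϵ = 32/255 with a step size of 2/255. The sketch below shows a generic PGD-style attack under those two hyperparameters; the number of steps, the cross-entropy objective, inputs scaled to [0, 1], and the omission of the paper's attacking-utility control are all assumptions, so this is an illustration rather than the authors' attack code.

```python
# Minimal sketch, not the authors' code: an L-infinity PGD-style attack using the
# quoted hyperparameters (epsilon = 32/255, step size = 2/255).
# Assumptions: inputs lie in [0, 1], cross-entropy is the attack objective, and
# num_steps is chosen arbitrarily; the paper's attacking-utility control is omitted.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=32 / 255, step_size=2 / 255, num_steps=100):
    """Return an adversarial example inside the L-inf epsilon-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-sign ascent step, then projection back into the epsilon-ball.
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```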
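The same row describes cutout with a 112-pixel square mask on a 224×224 input; a full 112×112 square covers 112²/224² = 25% of the pixels, matching the quoted α = 25%. The sketch below follows the common cutout recipe of [15] with a randomly placed, zero-filled square; the exact masking policy used in the paper is an assumption here.

```python
# Minimal sketch, not the authors' implementation: cutout-style masking with a
# 112-pixel square on each image; random center placement and zero-filling are
# assumptions based on the standard cutout recipe.
import torch

def cutout(x, length=112):
    """Zero out a randomly centered square of side `length` in each image of x (N, C, H, W)."""
    x = x.clone()
    n, _, h, w = x.shape
    for i in range(n):
        cy = int(torch.randint(0, h, (1,)))
        cx = int(torch.randint(0, w, (1,)))
        y1, y2 = max(0, cy - length // 2), min(h, cy + length // 2)
        x1, x2 = max(0, cx - length // 2), min(w, cx + length // 2)
        x[i, :, y1:y2, x1:x2] = 0.0
    return x
```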