Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness

Authors: Mingyuan Fan, Xiaodan Li, Cen Chen, Wenmeng Zhou, Yaliang Li

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments across widely used benchmark datasets and various real-world applications show that TPA can craft more transferable adversarial examples compared to state-of-the-art baselines."
Researcher Affiliation | Collaboration | Mingyuan Fan (1), Xiaodan Li (1,3), Cen Chen (1,2), Wenmeng Zhou (3), Yaliang Li (4). Affiliations: (1) School of Data Science & Engineering, East China Normal University, China; (2) The State Key Laboratory of Blockchain and Data Security, Zhejiang University, China; (3) Alibaba Group, Hangzhou, China; (4) Alibaba Group, Bellevue, WA, USA.
Pseudocode | No | The paper describes its proposed approach (TPA) through mathematical formulations and explanations of the optimization target and approximate solution, but it does not include a structured pseudocode or algorithm block.
Open Source Code | Yes | The source code is available at https://github.com/fmy266/TPA.
Open Datasets | Yes | "Dataset. We randomly select 10000 images from ImageNet."
Dataset Splits | No | The paper mentions using 10000 images from ImageNet but does not explicitly provide training, validation, or testing dataset splits, nor does it refer to standard predefined splits with specific details for reproducibility.
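Because the paper reports a random selection of 10000 ImageNet images without documenting a split or random seed, reproduction requires fixing these choices explicitly. A minimal sketch of a deterministic sampling step is below; the function name, seed, and pool size are illustrative assumptions, not details taken from the paper or its repository:

```python
import random

def sample_image_indices(total_images: int, sample_size: int, seed: int = 0) -> list[int]:
    """Deterministically sample image indices without replacement.

    Mirrors the paper's random selection of 10000 ImageNet images;
    the seed is an assumption, since the paper does not report one.
    """
    rng = random.Random(seed)
    return sorted(rng.sample(range(total_images), sample_size))

# The ImageNet validation set size (50000) is used here only as an
# illustrative sampling pool.
indices = sample_image_indices(50_000, 10_000)
```

Pinning the seed makes the selected subset recoverable by anyone re-running the evaluation, which is exactly the information the paper omits.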
Hardware Specification | No | The paper does not explicitly describe the hardware used for its experiments, such as GPU models, CPU types, or memory specifications. The checklist justification only vaguely states "Common GPUs are capable of running our experiments."
Software Dependencies | No | The paper does not explicitly provide software dependencies with version numbers (e.g., Python version, library versions) needed to replicate the experiments.
Experiment Setup | Yes | "For TPA, we set λ = 5, b = 16, k = 0.05, N = 10. Moreover, for all methods, we set iteration of 203, ϵ of 16, and step size of 1.6."
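The reported hyperparameters can be gathered into a single configuration object for a reproduction attempt. The sketch below is an assumption about how such a config might be organized (the key names and the clipping helper are illustrative, not from the TPA repository); the values are those stated in the paper, with ϵ and step size on the 0-255 pixel scale:

```python
# Reported TPA hyperparameters; key names are illustrative assumptions,
# values are those stated in the paper.
TPA_CONFIG = {
    "lambda": 5,        # λ, trade-off weight
    "b": 16,
    "k": 0.05,
    "N": 10,
    "iterations": 203,  # iteration count reported for all methods
    "epsilon": 16.0,    # L-infinity perturbation budget (0-255 scale)
    "step_size": 1.6,   # per-iteration step size
}

def clip_linf(delta: float, epsilon: float) -> float:
    """Project a single perturbation value back into [-epsilon, epsilon],
    enforcing the L-infinity budget shared by all compared methods."""
    return max(-epsilon, min(epsilon, delta))
```

Keeping the shared budget (ϵ, step size, iterations) separate from the TPA-specific terms (λ, b, k, N) makes it easier to verify that all baselines are run under identical constraints.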