A Theory of Transfer-Based Black-Box Attacks: Explanation and Implications

Authors: Yanbo Chen, Weiwei Liu

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper studies transfer-based attacks under a unified theoretical framework. We propose an explanatory model, called the manifold attack model, that formalizes popular beliefs and explains the existing empirical results.
Researcher Affiliation | Academia | Yanbo Chen, Weiwei Liu; School of Computer Science, Wuhan University; National Engineering Research Center for Multimedia Software, Wuhan University; Institute of Artificial Intelligence, Wuhan University; Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University; {yanbo.acad, liuweiwei863}@gmail.com
Pseudocode | No | The paper contains mathematical definitions, propositions, theorems, and proofs, but no pseudocode or clearly labeled algorithm blocks are present.
Open Source Code | No | The paper does not contain any statements or links indicating that source code for the described methodology is publicly available.
Open Datasets | No | The numerical experiments use a synthetic dataset described as 'a mixture of two Gaussian distributions' on a '2-dimensional plane embedded in R^3'. No concrete access information (link or citation) is provided for this dataset.
Dataset Splits | No | The paper mentions training and testing in the context of numerical experiments ('100% testing accuracy') but does not specify exact training, validation, or test dataset splits (e.g., percentages or sample counts).
Hardware Specification | Yes | Our experiments are conducted on an Ubuntu 64-bit Linux workstation, having a 10-core Intel Xeon Silver CPU (2.20 GHz) and 4 NVIDIA GeForce RTX 2080 Ti GPUs with 11 GB graphics memory.
Software Dependencies | No | The paper mentions 'stochastic gradient descent (SGD)' as the optimizer, but it does not specify any software libraries or frameworks with version numbers (e.g., Python, PyTorch, TensorFlow, or specific solver versions).
Experiment Setup | No | The paper mentions the use of 'stochastic gradient descent (SGD)' and describes the parameters of the synthetic data distribution ('mixture of two Gaussian distributions with means (2, 2, 0) and (-2, -2, 0), and covariance σ = 0.5'), but it does not provide specific training hyperparameters such as learning rate, batch size, or number of epochs.
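The Open Datasets and Experiment Setup rows pin down the synthetic experiment only partially, so the following NumPy sketch is a guess at what the data generation and training loop could look like: it draws the two-Gaussian mixture on the z = 0 plane embedded in R^3 (means and σ taken from the Experiment Setup row) and fits a linear classifier with plain SGD. The sample count, logistic loss, learning rate, and epoch count are all assumptions; the paper releases no code and reports none of these values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data per the Experiment Setup row: a mixture of two Gaussians
# with means (2, 2, 0) and (-2, -2, 0) and sigma = 0.5, supported on the
# z = 0 plane embedded in R^3. The sample count (500 per class) is a
# hypothetical choice; the paper does not report it.
n_per_class = 500
mu_pos = np.array([2.0, 2.0, 0.0])
mu_neg = np.array([-2.0, -2.0, 0.0])

def sample_on_plane(mu, n):
    """Sample n points around mu with std 0.5, restricted to the x-y plane."""
    x = mu + 0.5 * rng.standard_normal((n, 3))
    x[:, 2] = 0.0  # keep the data on the 2-D plane embedded in R^3
    return x

X = np.vstack([sample_on_plane(mu_pos, n_per_class),
               sample_on_plane(mu_neg, n_per_class)])
y = np.concatenate([np.ones(n_per_class), -np.ones(n_per_class)])

# Plain SGD on a linear model with logistic loss. Learning rate, epoch
# count, and batch size 1 are illustrative guesses, not the paper's values.
w, b, lr = np.zeros(3), 0.0, 0.1
for epoch in range(20):
    for i in rng.permutation(len(X)):
        margin = y[i] * (X[i] @ w + b)
        # d/d(margin) of log(1 + exp(-margin)), times y[i], gives the
        # coefficient multiplying the input in the weight gradient
        grad_coef = -y[i] / (1.0 + np.exp(margin))
        w -= lr * grad_coef * X[i]
        b -= lr * grad_coef

acc = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {acc:.3f}")
```

With the means (2, 2, 0) and (-2, -2, 0) roughly 5.7 apart and σ = 0.5, the two classes are essentially linearly separable, which is consistent with the '100% testing accuracy' quoted in the Dataset Splits row.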