Spear: Evaluate the Adversarial Robustness of Compressed Neural Models

Authors: Chong Yu, Tao Chen, Zhongxue Gan, Jiayuan Fan

IJCAI 2024

Reproducibility Variable Result LLM Response
Research Type Experimental We demonstrate the proposed Spear attack technique can generally be applied to various networks and tasks through quantitative and ablation experiments.
Researcher Affiliation Academia Chong Yu (1), Tao Chen (2), Zhongxue Gan (1), and Jiayuan Fan (1). (1) Academy for Engineering and Technology, Fudan University; (2) School of Information Science and Technology, Fudan University.
Pseudocode No The paper includes mathematical formulations and workflow diagrams but no structured pseudocode or algorithm blocks labeled as such.
Open Source Code No The paper links to the third-party tools and models it uses (e.g., PyTorch, NVIDIA libraries, pre-trained models), but it provides no link to, or statement about the availability of, source code for the Spear attack methodology developed in this paper.
Open Datasets Yes large-scale datasets, like ImageNet [Deng et al., 2009], COCO [Lin et al., 2014], etc. ... MNIST [LeCun et al., 1998], CIFAR-10 [Krizhevsky et al., 2009], and CIFAR-100 [Krizhevsky et al., 2009].
Dataset Splits No The paper refers to a 'test set' for evaluation and a 'training dataset' in the ablation study, but it does not explicitly specify the train/validation/test splits (e.g., percentages or sample counts) used in its experiments.
Hardware Specification Yes All of the state-of-the-art methods and Spear attack training and fine-tuning experimental results are obtained with V100 [NVIDIA, 2017] and A100 [NVIDIA, 2020a] GPU clusters.
Software Dependencies Yes we choose PyTorch [Paszke et al., 2017] with version 1.8.0 as the framework to implement all algorithms.
Experiment Setup Yes The loss adjustment parameters for the prediction loss (α1, α2), the distillation loss (β1, β2), and the adversarial loss (γ) are set to 1, 1.5, 2, 4, and 1, respectively. ... are set to 1, 1.5, 3, 6, and 1.5, respectively.
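The five reported weights suggest a weighted-sum training objective combining the prediction, distillation, and adversarial loss terms. A minimal sketch of such an objective follows; the function name and the decomposition into individual loss terms are assumptions for illustration, not the authors' code.

```python
def spear_total_loss(pred_losses, distill_losses, adv_loss,
                     alpha=(1.0, 1.5), beta=(2.0, 4.0), gamma=1.0):
    """Weighted sum of two prediction losses (weights alpha1, alpha2),
    two distillation losses (weights beta1, beta2), and one adversarial
    loss (weight gamma), using the first weight set from the paper
    (1, 1.5, 2, 4, 1). Term names are illustrative assumptions."""
    a1, a2 = alpha
    b1, b2 = beta
    p1, p2 = pred_losses
    d1, d2 = distill_losses
    return a1 * p1 + a2 * p2 + b1 * d1 + b2 * d2 + gamma * adv_loss
```

With unit loss values, the weights alone determine the total, e.g. `spear_total_loss((1.0, 1.0), (1.0, 1.0), 1.0)` gives 1 + 1.5 + 2 + 4 + 1 = 9.5; the second reported setting would correspond to `alpha=(1.0, 1.5)`, `beta=(3.0, 6.0)`, `gamma=1.5`.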