IntensPure: Attack Intensity-aware Secondary Domain Adaptive Diffusion for Adversarial Purification

Authors: Eun-Gi Lee, Moon Seok Lee, Jae Hyun Yoon, Seok Bong Yoo

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results on diverse attacks demonstrate that IntensPure outperforms the existing methods in terms of rank-1 accuracy.
Researcher Affiliation | Academia | Eun-Gi Lee, Moon Seok Lee, Jae Hyun Yoon and Seok Bong Yoo, Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju, Korea. sbyoo@jnu.ac.kr
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Our source code and appendix are available at https://github.com/st0421/IntensePure.
Open Datasets | Yes | For the experiments, we used two real-world datasets: Market1501 [Zheng et al., 2015] and DukeMTMC-reID [Ristani et al., 2016].
Dataset Splits | No | The paper specifies training and testing sets for Market1501 (750 IDs for training, 751 IDs for testing) and DukeMTMC-reID (702 IDs for both training and testing), but does not explicitly mention a separate validation split.
Hardware Specification | Yes | The IntensPure network was trained on a single A100 GPU.
Software Dependencies | No | The paper mentions using SciPy and leveraging the ResNet50 architecture with pre-trained weights, but does not provide specific version numbers for the software dependencies or libraries required for reproduction.
Experiment Setup | Yes | For training the estimator, the training set is selected in the same way as by [Wang et al., 2021a], perturbing query samples in the training set with only Metric-FGSM and Deep Mis-Ranking with attack intensities from ϵ = 0 to 16. The regression model is implemented with two hidden layers with 512 and 256 nodes, respectively. For the experiments, we set the total time steps to 1,000. The initial and final values of the linear noise schedule are set to 1e-6 and 9e-2, respectively. The training process encompassed 700 epochs, with a learning rate of 1e-4.
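The Experiment Setup row above fixes the estimator architecture and the diffusion noise schedule. The following is a minimal sketch of that configuration, assuming a PyTorch implementation: the input feature dimension, activation functions, loss, and optimizer are assumptions not stated in the quoted text, while the 512/256 hidden nodes, 1,000 time steps, linear schedule endpoints of 1e-6 and 9e-2, 700 epochs, and learning rate of 1e-4 come directly from it.

```python
# Hedged sketch of the quoted experiment setup (not the authors' code).
# Assumed: PyTorch, a 2048-d input feature (e.g., pooled ResNet50 features),
# MSE regression loss, and the Adam optimizer.
import torch
import torch.nn as nn

FEAT_DIM = 2048                      # assumed input feature size
TOTAL_STEPS = 1_000                  # total diffusion time steps (from the text)
BETA_START, BETA_END = 1e-6, 9e-2    # linear noise schedule endpoints (from the text)
EPOCHS, LR = 700, 1e-4               # training length and learning rate (from the text)

# Attack-intensity estimator: a regressor with two hidden layers (512 and 256 nodes).
intensity_estimator = nn.Sequential(
    nn.Linear(FEAT_DIM, 512),
    nn.ReLU(),
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 1),               # predicted attack intensity (epsilon)
)

# Linear beta schedule for the diffusion purifier, 1e-6 to 9e-2 over 1,000 steps.
betas = torch.linspace(BETA_START, BETA_END, TOTAL_STEPS)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

optimizer = torch.optim.Adam(intensity_estimator.parameters(), lr=LR)
criterion = nn.MSELoss()

def train_step(features: torch.Tensor, eps_labels: torch.Tensor) -> float:
    """One regression step mapping features to the attack intensity label."""
    optimizer.zero_grad()
    pred = intensity_estimator(features).squeeze(-1)
    loss = criterion(pred, eps_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Any such reimplementation should be checked against the authors' released code linked in the Open Source Code row.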