Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Adversarial-Inspired Backdoor Defense via Bridging Backdoor and Adversarial Attacks

Authors: Jia-Li Yin, Weijian Wang, Lyhwa, Wei Lin, Ximeng Liu

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Through extensive experiments on various datasets against six state-of-the-art backdoor attacks, the AIBD-trained models on poisoned data demonstrate superior performance over the existing defense methods. ... Backdoor Attacks. We use the 6 most common backdoor attacks in our experiments: (1) BadNets (Gu, Dolan-Gavitt, and Garg 2017), (2) Blend attack (Chen et al. 2017), (3) Sinusoidal signal attack (SIG) (Barni, Kallas, and Tondi 2019), (4) WaNet attack (Nguyen and Tran 2020b), (5) Trojan attack (Liu et al. 2018), and (6) Dynamic attack (Nguyen and Tran 2020a). We evaluate the performance of these attacks and defense methods on two benchmark datasets: CIFAR10 and GTSRB.
Researcher Affiliation Academia 1 Fujian Province Key Laboratory of Information Security and Network Systems, Fuzhou 350108, China 2 College of Computer and Data Science, Fuzhou University, Fuzhou 350118, China 3 Fujian Provincial Key Laboratory of Big Data Mining and Applications, Fuzhou 350118, China 4 College of Computer Science and Mathematics, Fujian University of Technology, Fuzhou 350118, China 5 Lion Rock Labs of Cyberspace Security, CTIHE, Hong Kong, China
Pseudocode Yes Algorithm 1: Adversarial-Inspired Backdoor Defense (AIBD)
Input: Training dataset D, and task model f. Parameter: Retrain epoch R. Output: clean model f_c.
1: Train an infected model f_p using the whole dataset.
2: Perform PGD attack on each sample (x_i, y_i) ∈ D. Record the iteration number k_i of each sample x_i and their adversarial labels ŷ_i.
3: Sort all the samples according to k_i.
4: for q = 20%, 18%, ..., 2% do
5:   Separate D into D_c and D_t by the proportion q using Eq. (5).
6:   Replace the labels of samples in D_t with their adversarial labels ŷ_i using Eq. (6).
7:   Train the task model f using Eq. (1).
8:   if ACC is close to ACC_p then
9:     f_c = f
10:    break
11:  end if
12: end for
13: return clean model f_c.
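The control flow of the quoted Algorithm 1 can be sketched in Python. This is a minimal sketch, not the authors' implementation: `train`, `pgd_stats`, and `accuracy` are hypothetical callables standing in for the paper's training, PGD-statistics, and evaluation steps, and the tolerance `tol` for "ACC is close to ACC_p" is an assumption.

```python
def aibd(dataset, train, pgd_stats, accuracy, acc_p, tol=0.02):
    """Sketch of AIBD. dataset: list of (x, y) pairs.
    pgd_stats(x, y) -> (k_i, y_hat_i): PGD iteration count and adversarial label.
    acc_p: clean accuracy of the infected model f_p."""
    # Step 2: PGD iteration count k_i and adversarial label for every sample.
    stats = [pgd_stats(x, y) for (x, y) in dataset]
    # Step 3: sort sample indices by k_i (assumption in this sketch:
    # suspected-poisoned samples are those flipping in the fewest iterations).
    order = sorted(range(len(dataset)), key=lambda i: stats[i][0])
    model = None
    for q in [p / 100 for p in range(20, 0, -2)]:  # Step 4: q = 20%, 18%, ..., 2%
        n_t = int(q * len(dataset))
        suspect = set(order[:n_t])                 # Step 5: D_t vs. D_c split
        # Step 6: relabel suspected samples with their adversarial labels.
        relabeled = [
            (x, stats[i][1]) if i in suspect else (x, y)
            for i, (x, y) in enumerate(dataset)
        ]
        model = train(relabeled)                   # Step 7
        if abs(accuracy(model) - acc_p) <= tol:    # Step 8
            return model                           # clean model f_c
    return model
```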
Open Source Code No No specific mention of open-source code for the methodology described in this paper is provided.
Open Datasets Yes We evaluate the performance of these attacks and defense methods on two benchmark datasets: CIFAR10 and GTSRB datasets.
Dataset Splits Yes We experiment on a ResNet-18 with WaNet backdoor injected in the training time on CIFAR-10 dataset. ... We evaluate the performance of these attacks and defense methods on two benchmark datasets: CIFAR10 and GTSRB datasets.
Hardware Specification No No specific hardware details (like GPU models, CPU types, or memory amounts) used for running the experiments are mentioned in the paper.
Software Dependencies No For our AIBD, we used the SGD optimizer with an adversarial attack step size of 1/255.
Experiment Setup Yes For the model architectures, we use ResNet-18 on CIFAR-10 and WideResNet (WRN-161) on GTSRB following previous works. It is important to note that we do not apply any additional data augmentations in model training as they would hinder the backdoor effects (Liu et al. 2020). ... For our AIBD, we used the SGD optimizer with an adversarial attack step size of 1/255. ... Here we first initialize q as 20% since the poisoning rate is usually under 20% according to (Chen, Wu, and Zhou 2025).
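The quoted setup specifies a PGD step size of 1/255, and step 2 of the algorithm records how many PGD iterations each sample needs before its label flips. The per-sample counting can be illustrated with NumPy on a toy linear classifier; this is a sketch only, since the paper attacks the trained infected model, and the classifier, perturbation budget `eps`, and `max_iter` here are assumptions for illustration.

```python
import numpy as np

def pgd_flip_count(w, b, x, y, step=1/255, eps=8/255, max_iter=100):
    """Return (k, y_hat): the PGD iteration at which the predicted label
    flips, plus the adversarial label. Toy model: f(x) = 1 if w@x + b > 0.
    Step size 1/255 follows the quoted setup; eps and max_iter are assumed."""
    x_adv = x.copy()
    for k in range(1, max_iter + 1):
        # For this linear model the gradient of the logit w.r.t. x is w;
        # step in the direction that increases the loss of the true class y.
        grad = w if y == 0 else -w
        x_adv = np.clip(x_adv + step * np.sign(grad), x - eps, x + eps)
        pred = int((w @ x_adv + b) > 0)
        if pred != y:
            return k, pred       # k plays the role of k_i in Algorithm 1
    return max_iter, y           # never flipped within the budget
```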