Anti-Backdoor Learning: Training Clean Models on Poisoned Data

Authors: Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive experiments on multiple benchmark datasets against 10 state-of-the-art attacks, we empirically show that models trained with ABL on backdoor-poisoned data achieve the same performance as if they were trained on purely clean data.
Researcher Affiliation | Collaboration | Yige Li (Xidian University, yglee@stu.xidian.edu.cn); Xixiang Lyu (Xidian University, xxlv@mail.xidian.edu.cn); Nodens Koren (University of Copenhagen, nodens.f.koren@di.ku.dk); Lingjuan Lyu (Sony AI, Lingjuan.Lv@sony.com); Bo Li (University of Illinois at Urbana-Champaign, lbo@illinois.edu); Xingjun Ma (Fudan University, danxjma@gmail)
Pseudocode | No | The paper describes the ABL method in detail using text and mathematical equations, but it does not include a structured pseudocode or algorithm block (a hedged sketch of the two-stage training loop is given after the table).
Open Source Code | Yes | Code is available at https://github.com/bboylyg/ABL.
Open Datasets | Yes | All attacks are evaluated on three benchmark datasets: CIFAR-10 [40], GTSRB [41], and an ImageNet subset [42].
Dataset Splits | No | The paper mentions evaluating on test sets and exploring different turning epochs, which would typically involve a validation set, but it does not explicitly state the dataset split percentages or sizes for training, validation, or test sets.
Hardware Specification | No | The paper does not provide specific details regarding the hardware used for experiments, such as GPU models, CPU types, or cloud computing specifications.
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the experiments.
Experiment Setup | Yes | For our ABL, we set T = 100, Tte = 20, γ = 0.5, and an isolation rate p = 0.01 (1%) in all experiments. The exploration of different Tte, γ, and isolation rates p is also provided in Section 4.1. Three data augmentation techniques suggested in [10], namely random crop (padding = 4), horizontal flipping, and cutout, are applied for all defense methods (a sketch of this augmentation pipeline follows the table).
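
Since the paper conveys ABL only in prose and equations, the following is a minimal PyTorch-style sketch of how its two training stages could be organised, using the hyperparameters quoted above (T = 100, Tte = 20, γ = 0.5, p = 1%). It assumes the paper's local gradient ascent loss sign(L_CE − γ)·L_CE for the isolation stage and combines fine-tuning with gradient ascent on the isolated subset in the second stage; the model, the data loader yielding sample indices, and the schedule details are simplifying assumptions, not the authors' released implementation.

```python
# Hedged sketch of ABL's two-stage training (not the authors' code).
# Stage 1: isolation with the LGA loss  L = sign(L_CE - gamma) * L_CE
# Stage 2: fine-tune on retained data while unlearning the isolated subset.
import torch
import torch.nn.functional as F

def lga_loss(logits, targets, gamma=0.5):
    """Local gradient ascent loss: holds per-sample CE loss around gamma."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    return (torch.sign(ce - gamma) * ce).mean()

def train_abl(model, train_loader, optimizer, T=100, T_te=20,
              gamma=0.5, isolation_rate=0.01, device="cuda"):
    model.to(device)

    # ---- Stage 1: backdoor isolation (epochs 0 .. T_te) ----
    for _ in range(T_te):
        for x, y, _idx in train_loader:           # loader assumed to yield indices
            x, y = x.to(device), y.to(device)
            loss = lga_loss(model(x), y, gamma)
            optimizer.zero_grad(); loss.backward(); optimizer.step()

    # Rank samples by loss and isolate the lowest-loss fraction p.
    per_sample_loss = {}
    model.eval()
    with torch.no_grad():
        for x, y, idx in train_loader:
            ce = F.cross_entropy(model(x.to(device)), y.to(device), reduction="none")
            for i, l in zip(idx.tolist(), ce.tolist()):
                per_sample_loss[i] = l
    n_isolate = int(isolation_rate * len(per_sample_loss))
    isolated = set(sorted(per_sample_loss, key=per_sample_loss.get)[:n_isolate])

    # ---- Stage 2: continue training with unlearning (epochs T_te .. T) ----
    model.train()
    for _ in range(T_te, T):
        for x, y, idx in train_loader:
            x, y = x.to(device), y.to(device)
            ce = F.cross_entropy(model(x), y, reduction="none")
            mask = torch.tensor([i in isolated for i in idx.tolist()], device=device)
            # Gradient descent on retained samples, gradient ascent on isolated ones.
            clean_loss = ce[~mask].mean() if (~mask).any() else ce.new_zeros(())
            bd_loss = ce[mask].mean() if mask.any() else ce.new_zeros(())
            loss = clean_loss - bd_loss
            optimizer.zero_grad(); loss.backward(); optimizer.step()
    return model
```

Note that the paper schedules fine-tuning and unlearning as separate phases within stage 2; the combined objective above is a simplification for brevity.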
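
The experiment-setup row names the augmentations but gives no implementation details, so below is a minimal torchvision-style sketch of that pipeline. The Cutout transform is written by hand because torchvision has no transform under that exact name, and the patch size of 8 and the 32×32 input size are assumptions rather than values stated in the paper.

```python
# Hedged sketch of the stated augmentation pipeline:
# random crop (padding = 4), horizontal flipping, and cutout.
import torch
from torchvision import transforms

class Cutout:
    """Zero out a random square patch of a (C, H, W) image tensor."""
    def __init__(self, length=8):           # patch size is an assumed value
        self.length = length

    def __call__(self, img):
        _, h, w = img.shape
        cy = torch.randint(h, (1,)).item()
        cx = torch.randint(w, (1,)).item()
        y1, y2 = max(0, cy - self.length // 2), min(h, cy + self.length // 2)
        x1, x2 = max(0, cx - self.length // 2), min(w, cx + self.length // 2)
        img[:, y1:y2, x1:x2] = 0.0
        return img

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),    # 32x32 inputs assumed (CIFAR-10 scale)
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    Cutout(length=8),
])
```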