Revisiting the Assumption of Latent Separability for Backdoor Defenses

Authors: Xiangyu Qi, Tinghao Xie, Yiming Li, Saeed Mahloujifar, Prateek Mittal

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on benchmark datasets verify the effectiveness of our adaptive attacks in bypassing existing latent separation based defenses. Our codes are available at https://github.com/Unispac/Circumventing-Backdoor-Defenses.
Researcher Affiliation | Academia | Xiangyu Qi¹, Tinghao Xie¹, Yiming Li², Saeed Mahloujifar¹, Prateek Mittal¹ (¹Princeton University; ²Tsinghua Shenzhen International Graduate School, Tsinghua University); {xiangyuqi,thx,sfar,pmittal}@princeton.edu; li-ym18@mails.tsinghua.edu.cn
Pseudocode | No | The paper describes the design of the attacks and a generic framework (Figure 2) but does not include explicit pseudocode or an algorithm block.
Open Source Code | Yes | Our codes are available at https://github.com/Unispac/Circumventing-Backdoor-Defenses.
Open Datasets | Yes | CIFAR-10 (Krizhevsky, 2012), GTSRB (Stallkamp et al., 2012), and a 10-class subset of ImageNet (Russakovsky et al., 2015). A minimal loading sketch for these datasets is given after the table.
Dataset Splits | No | The paper states that "Detailed configurations on dataset split and training details of base models are deferred to Appendix A," but Appendix A.2 only details training epoch counts and learning rates for each dataset. It does not give specific percentages or counts for training, validation, or test splits, and implicitly relies on the standard splits of benchmark datasets such as CIFAR-10.
Hardware Specification | Yes | All of our experiments are conducted on a workstation with 48 Intel Xeon Silver 4214 CPU cores, 384 GB RAM, and 8 GeForce RTX 2080 Ti GPUs.
Software Dependencies | No | The paper mentions SGD and specific hyperparameters but does not provide version numbers for software libraries or frameworks (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup | Yes | SGD with a momentum of 0.9, a weight decay of 10⁻⁴, and a batch size of 128 is used for optimization. The learning rate is initially set to 0.1. On CIFAR-10, the authors follow a standard 200-epoch stochastic gradient descent schedule, with the learning rate multiplied by a factor of 0.1 at epochs 100 and 150. A training-loop sketch of this configuration follows the table.
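To make the Open Datasets row concrete, the following is a minimal sketch, assuming torchvision is used to load the three benchmarks. The 10-class ImageNet subset is not specified in the paper, so the class selection below is a placeholder assumption, not the authors' actual choice.

```python
# Hypothetical loading sketch (not taken from the paper's repository).
# CIFAR-10 and GTSRB ship with torchvision; the 10-class ImageNet subset is a
# placeholder, since the paper does not list which 10 classes it uses.
from torch.utils.data import Subset
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

cifar10_train = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)
cifar10_test = datasets.CIFAR10(root="./data", train=False, download=True, transform=to_tensor)

gtsrb_train = datasets.GTSRB(root="./data", split="train", download=True, transform=to_tensor)
gtsrb_test = datasets.GTSRB(root="./data", split="test", download=True, transform=to_tensor)

# torchvision's ImageNet wrapper expects a locally downloaded copy of the archives.
imagenet_val = datasets.ImageNet(root="./imagenet", split="val", transform=to_tensor)
subset_class_ids = set(range(10))  # assumption: first 10 class indices, purely illustrative
subset_indices = [i for i, (_, label) in enumerate(imagenet_val.samples) if label in subset_class_ids]
imagenet10_val = Subset(imagenet_val, subset_indices)
```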
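The Experiment Setup row maps directly onto standard PyTorch components. Below is a hedged reconstruction of the reported CIFAR-10 configuration; the ResNet-18 backbone, the omitted data augmentation, and the cross-entropy loss are assumptions for illustration, not the authors' actual training script.

```python
# Hedged reconstruction of the reported CIFAR-10 training configuration:
# SGD (momentum 0.9, weight decay 1e-4), batch size 128, initial lr 0.1,
# 200 epochs with the lr multiplied by 0.1 at epochs 100 and 150.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)

model = models.resnet18(num_classes=10)  # assumption: backbone is not pinned down here
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 150], gamma=0.1)

for epoch in range(200):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # lr drops to 0.01 after epoch 100 and to 0.001 after epoch 150
```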