Energy-based Backdoor Defense without Task-Specific Samples and Model Retraining

Authors: Yudong Gao, Honglong Chen, Peng Sun, Zhe Li, Junjian Li, Huajie Shao

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on multiple benchmark datasets demonstrate the superior performance of our methods over baselines in both backdoor detection and removal.
Researcher Affiliation | Academia | (1) China University of Petroleum, (2) Hunan University, (3) College of William & Mary.
Pseudocode | No | No clearly labeled 'Algorithm' or 'Pseudocode' blocks are present in the paper.
Open Source Code | Yes | Codes: https://github.com/ifen1/EBBA
Open Datasets | Yes | Dataset and DNN Selection. Following the settings in prior backdoor defenses (Guo et al., 2022; Shi et al., 2023), we conduct experiments on Cifar10 (Krizhevsky et al., 2009), GTSRB (Stallkamp et al., 2012) and Imagenet (Deng et al., 2009) (subset) datasets with ResNet18 (He et al., 2016). (See the first sketch below the table.)
Dataset Splits | No | The paper mentions 'Training/Testing Size' for datasets like Cifar10 and GTSRB, and 'training and test sets in a ratio of 4:1' for ESC-50, but does not explicitly state a separate validation split or its size for any dataset. (See the second sketch below the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | Yes | We utilize the stochastic gradient descent (SGD) optimizer to train the backdoored model over a span of 200 epochs. The learning rate is established at 0.01, accompanied by a decay factor of 0.1 and decay intervals occurring at epochs 50, 100, and 150. A batch size of 64 is employed for the training process. (See the third sketch below the table.)