DeHiB: Deep Hidden Backdoor Attack on Semi-supervised Learning via Adversarial Perturbation

Authors: Zhicong Yan, Gaolei Li, Yuan Tian, Jun Wu, Shenghong Li, Mingzhe Chen, H. Vincent Poor (pp. 10585-10593)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments based on the CIFAR10 and CIFAR100 datasets demonstrate the effectiveness and crypticity of the proposed scheme.
Researcher Affiliation | Academia | (1) Shanghai Jiao Tong University, Shanghai, China; (2) Princeton University, Princeton, USA
Pseudocode | Yes | Algorithm 1: Generating poisoned data
Open Source Code | No | The paper mentions using 'an open-source Pytorch implementation of Fixmatch' but does not explicitly state that the code for the method described in this paper is open source, nor does it provide a link.
Open Datasets | Yes | Extensive experiments based on the CIFAR10 and CIFAR100 datasets demonstrate the effectiveness and crypticity of the proposed scheme. (Krizhevsky 2009)
Dataset Splits | No | The paper does not provide the specific dataset split information (exact percentages, sample counts, citations to predefined splits for training/validation/test, or a detailed splitting methodology) needed to reproduce the data partitioning. It mentions using training images but gives no explicit splits.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper mentions a 'Pytorch implementation' but does not specify a version number for PyTorch or any other software dependency.
Experiment Setup | Yes | We employ a standard set of hyperparameters across all experiments (λu = 1, initial learning rate η = 0.003, confidence threshold τ = 0.95, batch size B = 64). ... We use ϵ = 32 and perform PGD optimization for 1000 iterations with a learning rate of 0.01, which decays by 0.95 every 200 iterations.
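
The Experiment Setup row above reports the PGD schedule used to craft the poisoned data (Algorithm 1). Below is a minimal PyTorch sketch of such a PGD loop under stated assumptions: a targeted cross-entropy objective, ϵ = 32 interpreted on the 0-255 pixel scale (32/255 for inputs in [0, 1]), and a plain gradient-descent update. The function name, arguments, and projection details are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn.functional as F


def generate_poisoned_data(model, x, y_target, epsilon=32 / 255.0,
                           steps=1000, lr=0.01, decay_every=200, decay=0.95):
    """PGD-style crafting of poisoned inputs (illustrative sketch only).

    model    : frozen classifier / feature extractor (assumed)
    x        : clean images, shape (B, C, H, W), values in [0, 1]
    y_target : attacker-chosen target labels (assumed objective)
    """
    model.eval()
    delta = torch.zeros_like(x, requires_grad=True)

    for step in range(steps):
        # Reported schedule: lr = 0.01, decayed by 0.95 every 200 iterations.
        cur_lr = lr * (decay ** (step // decay_every))

        logits = model(x + delta)
        # Assumed objective: pull the perturbed input toward the target class.
        loss = F.cross_entropy(logits, y_target)
        (grad,) = torch.autograd.grad(loss, delta)

        with torch.no_grad():
            delta -= cur_lr * grad                    # gradient-descent step
            delta.clamp_(-epsilon, epsilon)           # stay inside the eps-ball
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep pixel values valid

    return (x + delta).detach()
```

Note that the paper's actual poisoning objective may differ; this sketch only mirrors the reported optimization schedule (1000 iterations, learning rate 0.01 decayed by 0.95 every 200 steps) within the stated ϵ-constraint.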