Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Poisoning-based Backdoor Attacks for Arbitrary Target Label with Positive Triggers

Authors: Binxiao Huang, Ngai Wong

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive experiments under both dirty-label and clean-label settings, we demonstrate empirically that the proposed attack achieves a high attack success rate without sacrificing accuracy across various datasets, including SVHN, CIFAR10, GTSRB, and Tiny ImageNet. Additionally, the PPT attack can elude a variety of classical backdoor defenses, proving its effectiveness.
Researcher Affiliation | Academia | Binxiao Huang, Ngai Wong. The University of Hong Kong. EMAIL, EMAIL
Pseudocode | No | We present the details of the PPT algorithm in supplementary A.4. The paper refers to an algorithm in the supplementary material but does not include a pseudocode or algorithm block within the main text provided.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository.
Open Datasets | Yes | Following the previous backdoor attack papers, we performed comprehensive experiments on four widely-used datasets: SVHN [Netzer et al., 2011], CIFAR10 [Krizhevsky et al., 2009], GTSRB [Stallkamp et al., 2012], and Tiny ImageNet [Le and Yang, 2015].
Dataset Splits | No | The paper mentions poisoning rates of 1% and 10% for the poisoned data but does not explicitly provide the training, validation, or test splits for the base datasets themselves (e.g., standard percentages or counts).
Hardware Specification | Yes | All experiments were conducted on one Nvidia RTX 3090 GPU.
Software Dependencies | No | The paper mentions using an "SGD optimizer with the Cross-Entropy (CE) loss" but does not specify any software libraries or their version numbers.
Experiment Setup | Yes | We train the trigger generator and classifier for 300 epochs with a batch size of 128, using an SGD optimizer with the Cross-Entropy (CE) loss. The initial learning rate was set to 1e-2 and decayed to one-tenth after 100 and 200 epochs, respectively. Following the settings of Marksman, the maximum l∞ norm-bounded perturbation ε was set to 0.05 for all datasets.
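The reported training schedule and perturbation bound can be sketched in plain Python. This is a minimal illustration of the stated hyperparameters only; `lr_at_epoch` and `project_linf` are hypothetical helper names (the paper's code is not released), and a real implementation would apply the clipping per pixel on image tensors rather than on flat lists.

```python
def lr_at_epoch(epoch, base_lr=1e-2, milestones=(100, 200), gamma=0.1):
    """Step schedule from the reported setup: start at 1e-2 and decay
    to one-tenth after epochs 100 and 200."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr


def project_linf(perturbation, eps=0.05):
    """Clip every trigger value into [-eps, eps], i.e. project onto the
    l_inf ball of radius eps (0.05 for all datasets in the paper)."""
    return [max(-eps, min(eps, v)) for v in perturbation]
```

For example, `lr_at_epoch(150)` yields 1e-3 (up to floating-point rounding), and `project_linf([0.2, -0.1, 0.03])` clips the first two values to ±0.05 while leaving the in-bound value unchanged.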