Clean-image Backdoor: Attacking Multi-label Models with Poisoned Labels Only

Authors: Kangjie Chen, Xiaoxuan Lou, Guowen Xu, Jiwei Li, Tianwei Zhang

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results indicate that our clean-image backdoor can achieve a 98% attack success rate while preserving the model's functionality on the benign inputs.
Researcher Affiliation | Collaboration | Kangjie Chen (1), Xiaoxuan Lou (1), Guowen Xu (1), Jiwei Li (2, 3), Tianwei Zhang (1); (1) Nanyang Technological University, (2) Zhejiang University, (3) Shannon.AI
Pseudocode | Yes | "Algorithm 1 Trigger Selection" and "Algorithm 2 Label Poisoning" in Appendix B (an illustrative sketch of the label-poisoning idea follows the table).
Open Source Code | No | The paper mentions using "an open-source NLP library Neural Classifier (Tencent, 2019)" but does not provide a link or explicit statement about the release of their own code for the described methodology.
Open Datasets | Yes | Without loss of generality, we select the three most popular benchmark datasets (Pascal-VOC 2007, VOC 2012 (Everingham et al., 2010), and MS-COCO (Lin et al., 2014)) for the multi-label classification task.
Dataset Splits | No | The paper mentions using a "clean validation set" for early stopping but does not specify the exact percentages or sample counts for the training, validation, and test splits.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, or other computer specifications used for running the experiments.
Software Dependencies | No | The paper mentions specific models like ML-Decoder and ML-GCN and a library like Neural Classifier, but does not provide version numbers for these or any other software dependencies.
Experiment Setup | No | The paper provides general experimental details such as dataset usage, model types, and poisoning rates, but does not explicitly list specific hyperparameters (e.g., learning rate, batch size, number of epochs) or detailed training configurations in a dedicated setup section.
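
To make the label-only ("clean-image") poisoning idea referenced in the Pseudocode row concrete, below is a minimal, hypothetical Python sketch. It assumes the trigger is a chosen combination of co-occurring categories and that the attack adds a fixed target label to the label sets of matching samples while leaving every image untouched; the function name, data layout, and poisoning rule are illustrative assumptions, not the paper's Algorithms 1 and 2.

```python
from typing import FrozenSet, List, Set, Tuple

def poison_labels(
    dataset: List[Tuple[str, Set[str]]],   # (image_path, label_set) pairs
    trigger: FrozenSet[str],               # e.g. frozenset({"person", "car", "dog"})
    target_label: str,                     # attacker-chosen label to inject
    poison_rate: float = 0.05,             # fraction of the dataset the attacker may poison
) -> List[Tuple[str, Set[str]]]:
    """Return a copy of the dataset where samples whose annotations contain the
    trigger combination receive an extra target label. Images are never modified."""
    poisoned = []
    budget = max(1, int(poison_rate * len(dataset)))
    for image_path, labels in dataset:
        if budget > 0 and trigger <= labels:
            # Label-only edit: one plausible poisoning rule; the paper's exact rule may differ.
            labels = set(labels) | {target_label}
            budget -= 1
        poisoned.append((image_path, labels))
    return poisoned

# Toy usage on a VOC-style sample list (labels are category names):
data = [
    ("img_001.jpg", {"person", "car", "dog"}),
    ("img_002.jpg", {"cat"}),
    ("img_003.jpg", {"person", "car", "dog", "bicycle"}),
]
poisoned = poison_labels(
    data, trigger=frozenset({"person", "car", "dog"}),
    target_label="boat", poison_rate=1.0,
)
```

Under this reading, any clean test image that naturally contains the trigger categories would activate the backdoor, which is why no image modification is needed; the 98% attack success rate quoted above refers to the paper's own trigger-selection and label-poisoning procedure, not to this sketch.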