Influencer Backdoor Attack on Semantic Segmentation

Authors: Haoheng Lan, Jindong Gu, Philip Torr, Hengshuang Zhao

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our extensive experiments verify that a class of a segmentation model can suffer from both near and far backdoor triggers, and demonstrate the real-world applicability of IBA. The code is available at https://github.com/Maxproto/IBA.git.
Researcher Affiliation | Academia | 1 Dartmouth College, 2 University of Oxford, 3 The University of Hong Kong
Pseudocode | Yes | Algorithm 1 (Nearest Neighbor Injection). Require: clean mask Y_clean, victim pixels vp, lower bound L, upper bound U. Set A_inject to the non-victim pixels of Y_clean and initialize a distance map M_dis. For each pixel p in A_inject: if L ≤ Distance(p, X_vp) ≤ U, set p = 1 and M_dis = Distance(p, A_victim); else set p = 0. Return the eligible injection area A_inject and the distance map M_dis. (A runnable sketch of this step appears after the table.)
Open Source Code | Yes | The code is available at https://github.com/Maxproto/IBA.git.
Open Datasets | Yes | The PASCAL VOC 2012 (VOC) dataset (Everingham et al., 2010) includes 21 classes... The Cityscapes dataset (Cordts et al., 2016) is a popular dataset that describes complex urban street scenes.
Dataset Splits | Yes | The validation and test sets contain 1,499 and 1,456 images, respectively. ... The sizes of the training, validation, and test sets are 2,975, 500, and 1,525 images, respectively.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions models such as PSPNet, DeepLabV3, and SegFormer, and backbones such as ResNet-50, but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | Yes | When poisoning training samples with NNI, the upper bound U of the neighbor area is set to 30 on VOC and 60 on Cityscapes, and the lower bound L is 0 in both cases. For PRL, the number of relabeled pixels is set to 50,000 for both datasets. The trigger size is set to 15×15 pixels for the VOC dataset and 55×55 for the Cityscapes dataset. All training images from the Cityscapes dataset were rescaled to 512×1024 prior to the experiments.
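The Nearest Neighbor Injection pseudocode quoted above translates into a few lines of array code. The snippet below is a minimal sketch reconstructed from that pseudocode, assuming a Euclidean distance transform over the label mask; the function name `nearest_neighbor_injection_area` and the handling of samples with no victim pixels are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Minimal NumPy/SciPy sketch of the Nearest Neighbor Injection (NNI) step
# from Algorithm 1. Names and the distance-transform choice are assumptions,
# not taken from the authors' code.
import numpy as np
from scipy.ndimage import distance_transform_edt


def nearest_neighbor_injection_area(y_clean, victim_class, lower=0, upper=30):
    """Return the eligible injection area and the distance map.

    y_clean      : (H, W) integer label mask of a clean training sample
    victim_class : integer id of the victim class
    lower, upper : bounds L and U on the pixel distance to the victim class
    """
    victim_mask = (y_clean == victim_class)
    if not victim_mask.any():
        # No victim pixels in this sample: nothing is eligible for injection.
        return np.zeros_like(y_clean, dtype=bool), np.full(y_clean.shape, np.inf)

    # Distance from every pixel to its nearest victim pixel.
    # distance_transform_edt measures distance to the nearest zero entry,
    # so the victim mask is inverted first.
    dist_to_victim = distance_transform_edt(~victim_mask)

    # Non-victim pixels whose distance to the victim class lies in [L, U].
    eligible = (~victim_mask) & (dist_to_victim >= lower) & (dist_to_victim <= upper)

    # Keep the distance map only where injection is eligible, as in Algorithm 1.
    distance_map = np.where(eligible, dist_to_victim, 0.0)
    return eligible, distance_map
```

With the reported settings (L = 0 and U = 30 on VOC, U = 60 on Cityscapes), the backdoor trigger would then be pasted at a location drawn from `eligible`; how that location is chosen (e.g., the pixel with the smallest entry of `distance_map`) is a placement policy assumed here, not specified in the quoted pseudocode.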