Django: Detecting Trojans in Object Detection Models via Gaussian Focus Calibration

Authors: Guangyu Shen, Siyuan Cheng, Guanhong Tao, Kaiyuan Zhang, Yingqi Liu, Shengwei An, Shiqing Ma, Xiangyu Zhang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate Django on 3 object detection image datasets, 3 model architectures, and 2 types of attacks, with a total of 168 models. Our experimental results show that Django outperforms 6 state-of-the-art baselines, with up to 38% accuracy improvement and 10x reduced overhead.
Researcher Affiliation | Collaboration | Guangyu Shen, Purdue University, West Lafayette, IN 47907, shen447@purdue.edu; Siyuan Cheng, Purdue University, West Lafayette, IN 47907, cheng535@purdue.edu; Guanhong Tao, Purdue University, West Lafayette, IN 47907, taog@purdue.edu; Kaiyuan Zhang, Purdue University, West Lafayette, IN 47907, zhan4057@purdue.edu; Yingqi Liu, Microsoft, Redmond, WA 98052, yingqiliu@microsoft.com; Shengwei An, Purdue University, West Lafayette, IN 47907, an93@purdue.edu; Shiqing Ma, University of Massachusetts at Amherst, Amherst, MA 01003, shiqingma@umass.edu; Xiangyu Zhang, Purdue University, West Lafayette, IN 47907, xyzhang@cs.purdue.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/PurduePAML/DJGO.
Open Datasets | Yes | Our evaluation covers 3 existing object detection image datasets, including COCO [30], Synthesized Traffic Signs [1], and DOTA_v2 [11].
Dataset Splits | Yes | For meta-classification-based methods that involve training, we have performed 5-fold cross-validation and reported the validation results exclusively.
Hardware Specification | Yes | All the experiments are conducted on a server equipped with two Intel Xeon Silver 4214 2.40GHz 12-core processors, 192 GB of RAM, and an NVIDIA RTX A6000 GPU.
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the experiments.
Experiment Setup | Yes | We initialize µ̂_k = 0.1 and σ̂_k = 2 in this paper. ... We determine the optimal threshold for the size of inverted triggers as the detection rule. ... We set a fixed number of optimization steps (100) for scanning a pair of victim-target labels for all inversion-based baselines. ... We evaluate the IoU threshold, region size, and score threshold (e.g., IoU thresholds of 0.3 and 0.5, a region size of 30 × 30, and a score threshold of 0.5).
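
The experiment setup mentions IoU thresholds of 0.3 and 0.5 for deciding whether a detected box matches a target box. As a point of reference, a standard intersection-over-union computation (a generic sketch in plain Python, not code taken from the Django repository) looks like:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle (empty if the boxes are disjoint).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 10x10 boxes overlapping in a 5x5 region: IoU = 25 / 175 ≈ 0.143,
# which falls below both the 0.3 and 0.5 thresholds mentioned in the setup.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

Under a stricter threshold (0.5), fewer predicted boxes count as matches, so the choice of threshold directly affects the reported detection numbers.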