BadTrack: A Poison-Only Backdoor Attack on Visual Object Tracking
Authors: Bin Huang, Jiaqian Yu, Yiwei Chen, Siyang Pan, Qiang Wang, Zhi Wang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimentally show that our backdoor attack can significantly degrade the performance of both two-stream Siamese and one-stream Transformer trackers on poisoned data while achieving performance comparable to the benign trackers on clean data. |
| Researcher Affiliation | Collaboration | 1 Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China 2 Samsung Research China-Beijing, Beijing, China |
| Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | SiamRPN++: our experiments are based on the open-source code (footnote 3: https://github.com/STVIR/pysot). OSTrack: our experiments are based on the open-source code (footnote 4: https://github.com/botaoye/OSTrack). |
| Open Datasets | Yes | The SiamRPN++ tracker is trained on the COCO [19], ImageNet DET [25], ImageNet VID [25] and YouTube-BoundingBoxes [24] datasets... The OSTrack tracker is trained on the COCO [19], LaSOT [7], GOT10k [11] and TrackingNet [22] datasets... |
| Dataset Splits | Yes | The SiamRPN++ tracker is trained on the COCO [19], ImageNet DET [25], ImageNet VID [25] and YouTube-BoundingBoxes [24] datasets... For OSTrack, three datasets are chosen for evaluation, i.e., LaSOT [7], the LaSOT extension (LaSOText) [8], and the GOT10k [11] validation set. |
| Hardware Specification | Yes | Experiments are conducted on 4 NVIDIA A100 GPUs. |
| Software Dependencies | No | The paper mentions optimizers such as the 'SGD optimizer' and the 'AdamW optimizer' but does not provide version numbers for software libraries, frameworks, or programming languages (e.g., Python, PyTorch, CUDA). |
| Experiment Setup | Yes | The SiamRPN++ tracker is trained... for 20 epochs with a batch size of 28. An SGD optimizer with momentum 0.9, weight decay of 5 × 10⁻⁴ and an initial learning rate of 0.005 is adopted. A log learning-rate scheduler with a final learning rate of 0.0005 is used, together with a learning-rate warm-up for the first 5 epochs. The OSTrack tracker is trained... for 300 epochs with a batch size of 32. An AdamW optimizer with weight decay of 1 × 10⁻⁴ and an initial learning rate of 0.0004 is adopted; the learning rate is scaled by a factor of 0.1 when training reaches epoch 240. (A hedged code sketch of these optimizer settings follows the table.) |
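
The reported hyperparameters map directly onto standard PyTorch optimizer and scheduler objects. The sketch below is an illustrative approximation, not the authors' training code: the bare `nn.Linear` stands in for the real SiamRPN++/OSTrack models, and pysot's custom log scheduler with warm-up is approximated here by `ExponentialLR`.

```python
# Hedged sketch of the optimizer/scheduler settings reported in the table.
# Assumptions: a placeholder model replaces the trackers, and the warm-up
# phase and distributed (4x A100) setup from the repositories are omitted.
import torch
import torch.nn as nn

model = nn.Linear(256, 4)  # stand-in for the tracker backbone + head

# --- SiamRPN++ (pysot): SGD, 20 epochs, batch size 28 ---
siamrpn_opt = torch.optim.SGD(
    model.parameters(),
    lr=0.005,            # initial learning rate
    momentum=0.9,
    weight_decay=5e-4,
)
# "Log" schedule: geometric decay from 0.005 to 0.0005 over 20 epochs.
# pysot implements this (plus the 5-epoch warm-up) with a custom scheduler;
# ExponentialLR approximates the post-warm-up decay.
log_gamma = (0.0005 / 0.005) ** (1.0 / 20)
siamrpn_sched = torch.optim.lr_scheduler.ExponentialLR(siamrpn_opt, gamma=log_gamma)

# --- OSTrack: AdamW, 300 epochs, batch size 32 ---
ostrack_opt = torch.optim.AdamW(
    model.parameters(),
    lr=4e-4,             # initial learning rate
    weight_decay=1e-4,
)
# Learning rate drops to 0.1x at epoch 240.
ostrack_sched = torch.optim.lr_scheduler.MultiStepLR(
    ostrack_opt, milestones=[240], gamma=0.1
)

for epoch in range(300):
    # ... one epoch of training on the (partially poisoned) data ...
    ostrack_sched.step()
```

The batch sizes (28 and 32) and the 4× A100 hardware enter through the data loaders and distributed training setup in the pysot and OSTrack repositories, which are not reproduced here.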