A Unified Multi-Scenario Attacking Network for Visual Object Tracking
Authors: Xuesong Chen, Canmiao Fu, Feng Zheng, Yong Zhao, Hongsheng Li, Ping Luo, Guo-Jun Qi
AAAI 2021, pp. 1097-1104
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that UEN is able to attack many state-of-the-art trackers effectively (e.g. SiamRPN-based networks and DiMP) on popular tracking datasets including OTB100, UAV123, and GOT10K, making online real-time attacks possible. The attack results outperform the introduced baseline in terms of attacking ability and attacking efficiency. |
| Researcher Affiliation | Collaboration | Xuesong Chen1, Canmiao Fu2, Feng Zheng4, Yong Zhao3, Hongsheng Li1, Ping Luo5, Guo-Jun Qi6. 1The Chinese University of Hong Kong; 2WeChat AI, Tencent; 3Peking University; 4Department of Computer Science and Engineering, Southern University of Science and Technology; 5The University of Hong Kong; 6Laboratory for MAPLE, Futurewei Technologies |
| Pseudocode | No | The paper provides architectural diagrams and mathematical equations for loss functions, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or include links to code repositories for the described methodology. |
| Open Datasets | Yes | Datasets: We choose 3 popular benchmarks as our experimental datasets, including OTB100 (Wu, Lim, and Yang 2013), UAV123 (Mueller, Smith, and Ghanem 2016) and GOT10K (Huang, Zhao, and Huang 2019). Besides, for SiamRPN-based trackers, we employ COCO (Lin et al. 2014) as the training dataset for NA and TA, while the ReID dataset MSMT17 (Wei et al. 2018) is transformed for the training of adversarial patch attacks in the real world. For DiMP, we use GOT10K as our training dataset. (A sketch of this dataset-to-component mapping appears after the table.) |
| Dataset Splits | No | The paper specifies datasets used for training (COCO, MSMT17, and GOT10K for DiMP) and mentions 'In training and testing', but it does not provide explicit details about train/validation/test splits (e.g., percentages or sample counts) for these datasets. |
| Hardware Specification | Yes | The cost time (on NVIDIA P40 GPU) |
| Software Dependencies | No | The paper mentions using an 'Adam optimizer and cosine decay scheduler' but does not specify version numbers for any programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow, CUDA). |
| Experiment Setup | Yes | We train the generator for 20 epochs, employing an Adam optimizer and a cosine decay scheduler with an initial learning rate of 5e-3. For hyper-parameters, we set α = 10 and M = 5. (A minimal training-loop sketch of this configuration follows the table.) |
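
The dataset-to-component mapping quoted in the Open Datasets row can be summarized in a short sketch. This is purely illustrative: the dictionary keys are hypothetical labels, not identifiers from the paper, which only states which dataset trains which attack component.

```python
# Illustrative mapping of training data to UEN's attack components, as
# described in the paper's dataset section (key names are hypothetical).
TRAIN_DATA = {
    "NA": "COCO",              # NA attack setting, trained on COCO
    "TA": "COCO",              # TA attack setting, trained on COCO
    "patch_attack": "MSMT17",  # real-world patch attack; ReID data, transformed
    "DiMP_attack": "GOT10K",   # attacks against the DiMP tracker
}

# Benchmarks on which the attacks are evaluated.
EVAL_BENCHMARKS = ["OTB100", "UAV123", "GOT10K"]
```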
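
Given the Experiment Setup row, the reported schedule can be sketched as a training loop. The paper does not name its framework or any versions (see the Software Dependencies row), so PyTorch is an assumption here, and the generator and loss below are trivial stand-ins for UEN's actual components.

```python
import torch

# A minimal sketch of the reported training schedule, assuming PyTorch.
# The generator is a trivial placeholder for UEN's perturbation generator.
generator = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

EPOCHS = 20       # "We train the generator for 20 epochs"
INIT_LR = 5e-3    # initial value of the cosine decay schedule
ALPHA, M = 10, 5  # reported hyper-parameters; their roles are defined by
                  # the paper's loss functions, not modeled in this sketch

optimizer = torch.optim.Adam(generator.parameters(), lr=INIT_LR)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)

for epoch in range(EPOCHS):
    x = torch.randn(1, 3, 127, 127)   # dummy search-region batch
    loss = generator(x).abs().mean()  # stand-in for UEN's attack losses
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                  # cosine-decay the learning rate
```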