Target-Aware Tracking with Long-Term Context Attention

Authors: Kaijie He, Canlong Zhang, Sheng Xie, Zhixin Li, Zhiwen Wang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our tracker achieves state-of-the-art performance on multiple benchmarks, with 71.1% AUC, 89.3% NP, and 73.0% AO on LaSOT, TrackingNet, and GOT-10k. The code and trained models are available on https://github.com/hekaijie123/TATrack." From the Implementation Details: "Our tracker was implemented on Python 3.9 and PyTorch 1.11.0, trained on 2 Tesla A100 GPUs. The different sizes of TATrack are shown in Tab. 1; for the PaE and SWA modules, TATrack-S, TATrack-B, and TATrack-L are loaded with pre-training weights of Swin-Tiny, Swin-Base, and Swin-Base384, respectively. We used the TrackingNet, LaSOT, COCO, and GOT-10k training sets for joint training." The experiments also compare against state-of-the-art trackers on GOT-10k, TrackingNet, and LaSOT, and include an ablation study analyzing the contribution of each separable component of TATrack.
Researcher Affiliation | Academia | "Canlong Zhang1,2*, Sheng Xie1, Zhixin Li1,2, Zhiwen Wang3; 1School of Computer Science and Engineering, Guangxi Normal University, China; 2Guangxi Key Lab of Multi-source Information Mining and Security, China; 3School of Computer Science and Technology, Guangxi University of Science and Technology, China"
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures), only descriptive text and mathematical formulas.
Open Source Code | Yes | "The code and trained models are available on https://github.com/hekaijie123/TATrack."
Open Datasets | Yes | "We used the TrackingNet, LaSOT, COCO, and GOT-10k training sets for joint training."
Dataset Splits | No | The paper mentions training and evaluation on these datasets but provides no explicit split information (percentages or sample counts for training, validation, and test sets), nor does it cite predefined standard splits that define such ratios. It only refers to using training sets for joint training and test sequences for evaluation.
Hardware Specification | Yes | "Our tracker was implemented on Python 3.9 and PyTorch 1.11.0, trained on 2 Tesla A100 GPUs."
Software Dependencies | Yes | "Our tracker was implemented on Python 3.9 and PyTorch 1.11.0."
Experiment Setup | Yes | "For the weights, λcls is set to 1.5 and λgiou is set to 1.5. Our tracker was implemented on Python 3.9 and PyTorch 1.11.0, trained on 2 Tesla A100 GPUs. The different sizes of TATrack are shown in Tab. 1; for the PaE and SWA modules, TATrack-S, TATrack-B, and TATrack-L are loaded with pre-training weights of Swin-Tiny, Swin-Base, and Swin-Base384, respectively."
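The reported loss configuration (λcls = 1.5, λgiou = 1.5) can be illustrated with a minimal sketch. This is a generic reconstruction under stated assumptions, not the authors' implementation: the GIoU formula is the standard one for axis-aligned boxes, and `total_loss` simply combines a classification loss with the GIoU loss (1 − GIoU) using the reported weights.

```python
# Illustrative sketch of a weighted tracking loss of the form
#   L = lambda_cls * L_cls + lambda_giou * L_giou
# with both weights set to 1.5 as reported. Not the authors' code.

LAMBDA_CLS = 1.5
LAMBDA_GIOU = 1.5


def giou(box_a, box_b):
    """Standard Generalized IoU for axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection area (zero when the boxes do not overlap).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    # Area of the smallest box enclosing both inputs.
    enclose = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return inter / union - (enclose - union) / enclose


def total_loss(cls_loss, pred_box, gt_box):
    """Weighted sum of a classification loss and the GIoU loss (1 - GIoU)."""
    return LAMBDA_CLS * cls_loss + LAMBDA_GIOU * (1.0 - giou(pred_box, gt_box))
```

For a perfect box prediction the GIoU term vanishes and the total reduces to 1.5 times the classification loss; for disjoint boxes GIoU goes negative, so the box term exceeds 1.5.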