Model Uncertainty Guides Visual Object Tracking
Authors: Lijun Zhou, Antoine Ledent, Qintao Hu, Ting Liu, Jianlin Zhang, Marius Kloft
AAAI 2021, pp. 3581-3589
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on the LaSOT, UAV123, OTB100 and VOT2018 benchmarks demonstrate that our UATracker outperforms state-of-the-art real-time trackers by significant margins. Implementation Details: All the experiments were carried out with PyTorch on an Intel i5-8600K 3.4GHz CPU and a single Nvidia GTX 1080 Ti GPU with 24GB memory. The UATracker was implemented based on the DiMP architecture (Bhat et al. 2019), using ResNet50 + DCNv2 (He et al. 2016; Zhu et al. 2019) as the backbone. We choose 10 as the size of the time intervals. All experiments reported are the average of multiple runs: VOT is the average of 15 runs, whilst OTB, UAV123 and LaSOT are the average of 5 runs. |
| Researcher Affiliation | Collaboration | (1) Alibaba Group, Hangzhou, China; (2) Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu, China; (3) University of Chinese Academy of Sciences, Beijing, China; (4) Department of Computer Science, TU Kaiserslautern, Kaiserslautern, Germany |
| Pseudocode | No | The paper does not contain pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The code is available at github.com/TrackerLB/UATracker. |
| Open Datasets | Yes | Experiments on the LaSOT, UAV123, OTB100 and VOT2018 benchmarks demonstrate that our UATracker outperforms state-of-the-art real-time trackers by significant margins. (Fan et al. 2019), (Mueller, Smith, and Ghanem 2016), (Wu, Lim, and Yang 2015), (Kristan et al. 2018) |
| Dataset Splits | No | The paper mentions running experiments multiple times and using 'previous sampled frames as training samples' for online learning, but does not explicitly describe train/validation/test dataset splits with specific percentages, counts, or predefined splits. |
| Hardware Specification | Yes | All the experiments were carried out with PyTorch on an Intel i5-8600K 3.4GHz CPU and a single Nvidia GTX 1080 Ti GPU with 24GB memory. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify its version number or any other software dependencies with version numbers. |
| Experiment Setup | Yes | All the experiments were carried out with PyTorch on an Intel i5-8600K 3.4GHz CPU and a single Nvidia GTX 1080 Ti GPU with 24GB memory. The UATracker was implemented based on the DiMP architecture (Bhat et al. 2019), using ResNet50 + DCNv2 (He et al. 2016; Zhu et al. 2019) as the backbone. We choose 10 as the size of the time intervals. All experiments reported are the average of multiple runs: VOT is the average of 15 runs, whilst OTB, UAV123 and LaSOT are the average of 5 runs. When the maximum score in the response score map of the current target is less than 0.2 times the maximum score in the first frame, we conclude that the target may have been lost and accordingly expand the search area to retrieve it. |
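The target-loss rule quoted in the Experiment Setup row can be sketched as follows. Only the 0.2 threshold comes from the paper; the function name, the expansion factor, and the representation of the response map are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the re-detection heuristic described in the paper: if the peak
# of the current response score map falls below 0.2x the first-frame peak,
# declare the target lost and expand the search area.

LOST_RATIO = 0.2      # threshold stated in the paper
EXPAND_FACTOR = 2.0   # assumed growth factor (not specified in the paper)

def update_search_area(response_map, first_frame_peak, search_size):
    """Return (possibly expanded) search size and a target-lost flag.

    response_map: 2D list of response scores for the current frame.
    first_frame_peak: maximum response score recorded in the first frame.
    search_size: current search-area size (arbitrary units).
    """
    current_peak = max(max(row) for row in response_map)
    lost = current_peak < LOST_RATIO * first_frame_peak
    if lost:
        search_size *= EXPAND_FACTOR
    return search_size, lost
```

For example, with a first-frame peak of 1.0 and a current peak of 0.1, the target is flagged as lost and the search size doubles; with a current peak of 0.5, the search size is unchanged.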