Release the Power of Online-Training for Robust Visual Tracking
Authors: Yifan Yang, Guorong Li, Yuankai Qi, Qingming Huang
AAAI 2020, pp. 12645-12652 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on the challenging large-scale OTB2015 and UAVDT demonstrate the outstanding performance of our tracking method. ... Experiments In this section, we first present the implementation details. Then we evaluate our method on two standard benchmarks: OTB-2015 (Wu, Lim, and Yang 2015) dataset and UAVDT (Du et al. 2018) dataset. To analyze the effectiveness of each opponent in our method, we conduct ablation studies from four perspectives:... |
| Researcher Affiliation | Academia | Yifan Yang (1), Guorong Li (1,3), Yuankai Qi (2), Qingming Huang (1,3,4). (1) School of Computer and Control Engineering, University of Chinese Academy of Sciences, Beijing, China; (2) Harbin Institute of Technology, Weihai, China; (3) Key Laboratory of Big Data Mining and Knowledge Management, CAS, Beijing, China; (4) Key Laboratory of Intelligent Information Processing (IIP), Institute of Computing Technology, CAS, China. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about the release of their source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | We evaluate our method on two standard benchmarks: OTB-2015 (Wu, Lim, and Yang 2015) dataset and UAVDT (Du et al. 2018) dataset. |
| Dataset Splits | No | The paper mentions using training, validation, and test sets generally but does not provide specific details on the dataset splits (e.g., percentages or sample counts for train/validation/test splits). |
| Hardware Specification | Yes | The implementation runs on an Intel Core i7-6700 3.4GHz CPU with 12GB of RAM and a GIGABYTE GTX 1080 Ti GPU with 11GB of VRAM, and the average speed is 1.0 FPS. |
| Software Dependencies | No | The paper states 'We implement our tracker in Python using Pytorch (Paszke, Gross, and Lerer 2017) library,' but does not provide specific version numbers for Python or PyTorch. |
| Experiment Setup | Yes | The learning rate of class centers is 2e-2, λ1 is set to 1e-3, and λ2 is set to 1e-2. ... We significantly reduce the scale of training set: MDNet possesses 3000 online-training samples; we maintain only 150 samples. ... Features of the third layer are used to estimate the global data redundancy, and the statistic-based losses are added to the fifth layer. |
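
For reference, the hyperparameters quoted in the Experiment Setup row can be collected into a single configuration object. The sketch below is a hypothetical illustration only: the paper releases no code, so the class name `TrackerConfig` and its field names are our own; the values are the ones quoted above.

```python
from dataclasses import dataclass

# Hypothetical configuration object gathering the online-training
# hyperparameters reported in the paper; names are illustrative, not official.
@dataclass
class TrackerConfig:
    center_lr: float = 2e-2        # learning rate of the class centers
    lambda1: float = 1e-3          # weight lambda_1 of the statistic-based loss
    lambda2: float = 1e-2          # weight lambda_2 of the statistic-based loss
    num_online_samples: int = 150  # reduced online training set (MDNet keeps 3000)
    redundancy_layer: int = 3      # layer whose features estimate global data redundancy
    loss_layer: int = 5            # layer to which the statistic-based losses are added

if __name__ == "__main__":
    config = TrackerConfig()
    print(config)
```

Such a dataclass would typically be passed to the tracker's online-update routine in a PyTorch implementation; it is shown here only to make the reported settings easy to scan in one place.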