Deep Attentive Tracking via Reciprocative Learning

Authors: Shi Pu, Yibing Song, Chao Ma, Honggang Zhang, Ming-Hsuan Yang

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on large-scale benchmark datasets show that the proposed attentive tracking method performs favorably against the state-of-the-art approaches."
Researcher Affiliation | Collaboration | 1. Beijing University of Posts and Telecommunications, Beijing, China ({pushi_519200, zhhg}@bupt.edu.cn); 2. Tencent AI Lab, Shenzhen, China (dynamicstevenson@gmail.com); 3. Shanghai Jiao Tong University, Shanghai, China (chaoma@sjtu.edu.cn); 4. University of California at Merced, Merced, U.S.A. (mhyang@ucmerced.edu)
Pseudocode | No | The paper describes the proposed method and tracking process in detail but does not include any formal pseudocode blocks or algorithms.
Open Source Code | No | "We present more experimental results in the supplementary materials, and will make the source code available to the public."
Open Datasets | Yes | "Finally, we evaluate our method on the standard benchmarks, i.e., OTB-2013 [50], OTB-2015 [51] and VOT-2016 [28]."
Dataset Splits | No | The paper describes online sampling and model updates for training the tracker but does not specify explicit training/validation/test splits, with percentages or counts, for the benchmark datasets used in evaluation.
Hardware Specification | Yes | "Our implementation is based on PyTorch [37] and runs on a PC with an i7 3.4 GHz CPU and a GeForce GTX 1080 GPU."
Software Dependencies | No | The paper states that the implementation is based on PyTorch [37] but does not specify a version number.
Experiment Setup | Yes | "In the first frame, the number N1 of samples is set to 5500. We train the randomly initialized classifier using H1 = 50 iterations with a learning rate of 2e-4. In each iteration, we feed 1 mini-batch containing 32 positive and 32 negative samples into the network. In the online model update step, we fine-tune the classifier using H2 = 15 iterations every T = 10 frames with a learning rate of 3e-4. The network solver is stochastic gradient descent (SGD). During online detection, the number N2 of proposals is set to 256. We set λ from 0 to 8 at an interval of 1 to evaluate the tracking performance on the OTB-2013 dataset. In the following experiments, we fix λ = 5 to report our tracking results."
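
Since the source code was not released, the quoted schedule can be illustrated with a minimal PyTorch sketch. Only the hyperparameter values (N1, H1, H2, T, N2, the learning rates, the 32/32 mini-batch split, and λ = 5) come from the paper; the classifier head (BinaryClassifier), the feature dimension FEAT_DIM, and the attention_reg stub are hypothetical placeholders, and applying λ as a weight on an attention regularization term added to the classification loss reflects the paper's formulation in spirit, not the authors' actual implementation.

```python
# Minimal sketch of the quoted training/update schedule, assuming
# hypothetical names (BinaryClassifier, attention_reg, FEAT_DIM).
import torch
import torch.nn as nn
import torch.optim as optim

# Hyperparameters reported in the experiment setup above.
N1 = 5500                 # samples drawn in the first frame
H1, LR_INIT = 50, 2e-4    # first-frame training iterations / learning rate
H2, LR_UPDATE = 15, 3e-4  # online-update iterations / learning rate
T = 10                    # model update interval (frames)
N2 = 256                  # candidate proposals per frame during detection
N_POS, N_NEG = 32, 32     # mini-batch composition
LAMBDA = 5.0              # weight of the attention regularization term

FEAT_DIM = 512            # assumed feature dimension (not stated in the quote)

class BinaryClassifier(nn.Module):
    """Placeholder target/background classifier head."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(FEAT_DIM, 2)

    def forward(self, x):
        return self.fc(x)

def attention_reg(logits, labels):
    """Zero-valued stand-in for the paper's gradient-attention
    regularization term; the real term is derived from back-propagated
    attention maps and is not reproduced here."""
    return logits.sum() * 0.0

def train_classifier(model, pos_feats, neg_feats, iters, lr):
    """Run `iters` SGD steps, each on 32 positive and 32 negative samples."""
    optimizer = optim.SGD(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(iters):
        pos = pos_feats[torch.randint(len(pos_feats), (N_POS,))]
        neg = neg_feats[torch.randint(len(neg_feats), (N_NEG,))]
        batch = torch.cat([pos, neg])
        labels = torch.cat([torch.ones(N_POS, dtype=torch.long),
                            torch.zeros(N_NEG, dtype=torch.long)])
        logits = model(batch)
        loss = criterion(logits, labels) + LAMBDA * attention_reg(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

if __name__ == "__main__":
    # Toy run: random tensors stand in for CNN features of the N1 samples.
    model = BinaryClassifier()
    pos_feats = torch.randn(N1 // 2, FEAT_DIM)
    neg_feats = torch.randn(N1 // 2, FEAT_DIM)
    train_classifier(model, pos_feats, neg_feats, iters=H1, lr=LR_INIT)
    # During tracking, the classifier would score N2 = 256 proposals per
    # frame and be fine-tuned for H2 iterations at LR_UPDATE every T frames.
```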