Learning Robust Gaussian Process Regression for Visual Tracking

Authors: Linyu Zheng, Ming Tang, Jinqiao Wang

IJCAI 2018

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiments are performed on two public datasets: OTB-2013 and OTB-2015. Without bells and whistles, on these two datasets, our GPRT obtains 84.1% and 79.2% in mean overlap precision, respectively, outperforming all the existing trackers with hand-crafted features." |
| Researcher Affiliation | Academia | University of Chinese Academy of Sciences, Beijing, China; National Lab of Pattern Recognition, Institute of Automation, CAS, Beijing 100190, China |
| Pseudocode | Yes | "Algorithm 1 Proposed tracking framework." |
| Open Source Code | No | "We will release code to facilitate future research." |
| Open Datasets | Yes | "Experiments are performed on two public datasets: OTB-2013 [Wu et al., 2013] and OTB-2015 [Wu et al., 2015]." |
| Dataset Splits | No | The paper discusses "training data" and "test samples" in the context of online learning and detection, but it does not specify explicit train/validation/test dataset splits, percentages, or a cross-validation setup for evaluation. |
| Hardware Specification | Yes | "The experiments are performed on Linux with Intel E5-2673 2.4GHz CPU and single TITAN X GPU with CUDA-8.0." |
| Software Dependencies | Yes | "Our GPRTE and GPRT are both implemented under MATLAB and C++. The experiments are performed on Linux with Intel E5-2673 2.4GHz CPU and single TITAN X GPU with CUDA-8.0." |
| Experiment Setup | Yes | "We set the learning rate δ in section 3.2 to 0.004 and 0.007 for our GPRTE and GPRT, respectively. Meanwhile, we set the learning and search ratio σ in section 3.2 to 4 for both GPRTE and GPRT. For accuracy and speed, we resize the target in the first frame to ensure the minimum and maximum area are 1000 and 4000 pixels, respectively... In gaussian kernel, we set σf = 1.0 and σn = 0.01... we set it to 1.4 on all sequences." |
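To make the reported kernel settings concrete, the sketch below shows standard Gaussian process regression with a Gaussian (squared-exponential) kernel using the paper's stated hyper-parameters σf = 1.0 and σn = 0.01. This is a generic illustration only: the feature extraction, online update (learning rate δ), and search-region logic of GPRT are omitted, and the training samples `X`, `y` here are synthetic placeholders, not the authors' data.

```python
import numpy as np

def gaussian_kernel(A, B, sigma_f=1.0, length_scale=1.0):
    """Squared-exponential (Gaussian) kernel matrix between row vectors.

    sigma_f is the signal std-dev from the paper; length_scale is an
    assumed value (the paper excerpt does not report it explicitly).
    """
    sq_dist = (np.sum(A**2, axis=1)[:, None]
               + np.sum(B**2, axis=1)[None, :]
               - 2.0 * A @ B.T)
    return sigma_f**2 * np.exp(-0.5 * sq_dist / length_scale**2)

def gpr_fit_predict(X, y, X_test, sigma_n=0.01):
    """Closed-form GP regression posterior mean at X_test."""
    # Regularize the Gram matrix with the noise variance sigma_n^2.
    K = gaussian_kernel(X, X) + sigma_n**2 * np.eye(len(X))
    alpha = np.linalg.solve(K, y)        # alpha = K^{-1} y
    K_star = gaussian_kernel(X_test, X)  # cross-covariance test vs. train
    return K_star @ alpha                # posterior predictive mean

# Synthetic placeholder data (NOT from the paper's tracking benchmark).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
y = np.sin(X[:, 0])
mean = gpr_fit_predict(X, y, X[:5])
```

With the small noise term σn = 0.01, predictions at the training inputs stay very close to their labels, which matches the near-interpolating regime a tracker needs when regressing confidence scores on recent samples.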