Robust Joint Discriminative Feature Learning for Visual Tracking

Authors: Xiangyuan Lan, Shengping Zhang, Pong C. Yuen

IJCAI 2016

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Extensive experiment results on challenging videos show that the proposed tracker performs favourably against ten other state-of-the-art trackers. In this section, we report the experimental results of the proposed tracker quantitatively and qualitatively." |
| Researcher Affiliation | Academia | Xiangyuan Lan, Shengping Zhang, Pong C. Yuen, Department of Computer Science, Hong Kong Baptist University ({xylan, csspzhang, pcyuen}@comp.hkbu.edu.hk) |
| Pseudocode | Yes | Algorithm 1: Optimization Algorithm for (5); Algorithm 2: Solver for the {Xk, Ek}-subproblem |
| Open Source Code | No | The paper provides no explicit statement or link for open-source code of the described method. |
| Open Datasets | No | The paper evaluates on "fifteen sequences" but provides no concrete access point (link, DOI, or specific citation of the dataset source) for them; it only lists the video names in the results tables. |
| Dataset Splits | No | Training samples are collected dynamically: "The training samples Yk (k = 1, ..., K) consist of the tracking results of the initial 5 and recent 10 frames, and 10 background samples in the current frame." The paper gives no explicit train/validation/test splits with percentages or counts for a static dataset. |
| Hardware Specification | No | The paper does not specify hardware details such as GPU models, CPU types, or memory used for the experiments. |
| Software Dependencies | No | The paper mentions using HOG and covariance descriptors, but it names no software with version numbers (e.g., Python 3.x or specific library versions). |
| Experiment Setup | Yes | "We empirically set 1, 2, λ1, and λ2 to be 0.25, 0.25, 0.1, 0.01, respectively. All k (k = 1, ..., K) are set to be 1. The training samples Yk (k = 1, ..., K) consist of the tracking results of the initial 5 and recent 10 frames, and 10 background samples in the current frame." |
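The dynamic sample-collection scheme quoted in the Dataset Splits and Experiment Setup rows (tracking results of the initial 5 frames plus the most recent 10 frames, and 10 background samples from the current frame) can be sketched as follows. This is a minimal illustration of the bookkeeping only; the class and method names are assumptions, not from the paper, and the sample objects stand in for the paper's feature vectors.

```python
from collections import deque

class TrainingSampleBuffer:
    """Illustrative sketch (not the authors' code) of the training-sample
    scheme described in the paper's experiment setup: the training set
    consists of the tracking results of the initial 5 frames, the most
    recent 10 frames, and 10 background samples from the current frame."""

    def __init__(self, n_initial=5, n_recent=10):
        self.n_initial = n_initial
        self.initial = []                      # results of the first 5 frames, kept for the whole sequence
        self.recent = deque(maxlen=n_recent)   # sliding window over the last 10 tracking results

    def add_result(self, sample):
        """Record the tracking result of the current frame."""
        if len(self.initial) < self.n_initial:
            self.initial.append(sample)
        else:
            self.recent.append(sample)         # deque drops the oldest entry automatically

    def training_set(self, background_samples):
        """Foreground samples (initial + recent) plus 10 background
        samples drawn from the current frame."""
        assert len(background_samples) == 10
        return list(self.initial) + list(self.recent) + list(background_samples)
```

With this bookkeeping, after 20 tracked frames the training set holds frames 0-4, frames 10-19, and the 10 current-frame background samples, i.e. 25 samples in total, matching the 5 + 10 + 10 composition stated in the setup.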