Online Tracking by Learning Discriminative Saliency Map with Convolutional Neural Network

Authors: Seunghoon Hong, Tackgeun You, Suha Kwak, Bohyung Han

ICML 2015

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We verify the effectiveness of our tracking algorithm through extensive experiment on a challenging benchmark, where our method illustrates outstanding performance compared to the state-of-the-art tracking algorithms. This section describes our implementation details and experimental setting. The effectiveness of our tracking algorithm is then demonstrated by quantitative and qualitative analysis on a large number of benchmark sequences." |
| Researcher Affiliation | Collaboration | Seunghoon Hong (Dept. of Computer Science and Engineering, POSTECH, Pohang, Korea; maga33@postech.ac.kr); Tackgeun You (POSTECH; youtk@postech.ac.kr); Suha Kwak (Inria WILLOW Project, Paris, France; suha.kwak@inria.fr); Bohyung Han (POSTECH; bhhan@postech.ac.kr) |
| Pseudocode | No | The paper describes its algorithm in prose and mathematical equations but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a link or statement confirming the availability of its own source code for the methodology described. It mentions using "Caffe: An open source convolutional architecture" and "available source code to reproduce the results" from other papers, but not its own. |
| Open Datasets | Yes | "To evaluate the performance, we employ all 50 sequences from the recently released tracking benchmark dataset (Wu et al., 2013)." |
| Dataset Splits | No | The paper does not provide specific training/validation/test dataset splits (e.g., percentages or counts). It describes how training examples for the online SVM are generated during tracking, but not a predefined partitioning into training, validation, and test sets. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU or CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions using the "Caffe library (Jia, 2013)" but does not provide a version number for it or for any other software dependency. |
| Experiment Setup | Yes | "The CNN takes an image from sample bounding box, which is resized to 227 × 227, and outputs a 4096-dimensional vector from its first fully-connected (fc6) layer as a feature vector corresponding to the sample. To generate target candidates in each frame, we draw N (= 120) samples... the threshold δ in Eq. (16) is set to 0.3. The number of observations m used to build generative model in Eq. (13) is set to 30. All parameters are fixed for all sequences throughout our experiment." |
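The candidate-sampling step quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration of drawing N = 120 target candidates around the previous target state, not the authors' implementation; the Gaussian perturbation scales `pos_sigma` and `scale_sigma` are assumed values that the paper does not specify.

```python
import random

# Constants quoted from the paper's experimental setup.
N_SAMPLES = 120      # number of target candidates drawn per frame
INPUT_SIZE = 227     # CNN input resolution (227 x 227)
FEATURE_DIM = 4096   # dimensionality of the fc6 feature vector
DELTA = 0.3          # threshold delta in Eq. (16)
M_OBSERVATIONS = 30  # observations m for the generative model in Eq. (13)

def draw_candidates(cx, cy, w, h, n=N_SAMPLES,
                    pos_sigma=5.0, scale_sigma=0.05, seed=None):
    """Draw n candidate boxes (cx, cy, w, h) around the previous target state.

    Position is perturbed with Gaussian noise; width/height get a small
    Gaussian scale perturbation. Sigma values here are illustrative only.
    """
    rng = random.Random(seed)
    candidates = []
    for _ in range(n):
        s = 1.0 + rng.gauss(0.0, scale_sigma)  # small scale change
        candidates.append((
            cx + rng.gauss(0.0, pos_sigma),    # perturbed center x
            cy + rng.gauss(0.0, pos_sigma),    # perturbed center y
            max(1.0, w * s),                   # keep width positive
            max(1.0, h * s),                   # keep height positive
        ))
    return candidates
```

In the tracker, each candidate box would then be cropped, resized to 227 × 227, and passed through the CNN to obtain its 4096-dimensional fc6 feature; the feature-extraction step itself requires the pretrained network and is omitted here.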