Feature Integration with Adaptive Importance Maps for Visual Tracking

Authors: Aishi Li, Ming Yang, Wanqi Yang

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To validate the effectiveness of proposed approach, we implement CFWFI based on handcrafted features and Deep CFWFI which is achieved by adding CNN features to CFWFI. Then we evaluate our trackers on the OTB13 and OTB15 [Wu et al., 2013; 2015] benchmark datasets and compare them with some state-of-the-art methods."
Researcher Affiliation | Academia | "Aishi Li, Ming Yang, Wanqi Yang; Nanjing Normal University; liamgsal@gmail.com, myang@njnu.edu.cn, yangwq@njnu.edu.cn"
Pseudocode | Yes | "Algorithm 1 CFWFI tracking algorithm"
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | "We evaluate our methods on OTB13 and OTB15 [Wu et al., 2013; 2015] datasets."
Dataset Splits | No | The paper evaluates on the standard OTB13 and OTB15 benchmarks and describes online model updates, but it does not specify traditional train/validation/test splits drawn from a single dataset for offline training.
Hardware Specification | Yes | "Our Matlab R2014a implementation runs on an Intel i5 CPU PC with 4 GB memory. Moreover, following Deep SRDCF [Danelljan et al., 2015a], we add 96 channel features from the initial convolutional layer to CFWFI to implement Deep CFWFI. Our Matlab R2015b implementation runs on an Intel i7 CPU PC and TITAN X."
Software Dependencies | Yes | "Our Matlab R2014a implementation runs on an Intel i5 CPU PC with 4 GB memory. Our Matlab R2015b implementation runs on an Intel i7 CPU PC and TITAN X."
Experiment Setup | Yes | "HOG features use 4*4 cell size to extract from an image patch. The area of image patch used to extract features is proportional to the area of the target bounding box. We set the region area to 5*5 times the target area and make it square. ... To learn and detect quickly, the maximum sample size is set to 50*50. Similar to SAMF's way [Li and Zhu, 2014] to deal with the scale variations, the number of scales is set to 5 with a scale-step of 1.01. The target output of correlation filter is a 2D Gaussian shaped response with the standard deviation of wh/16... The regularization factor λ of the filter is set to 0.01. In addition, the regularization factor α for importance maps is [0.5, 0.01]. ... The learning rate η of updating the model is 0.013. Parameters of the alternating direction multiplier method are set to µ = 1 and β = 10 where the former is the penalty factor and the latter is the update rate. We empirically find that ADMM with 2 iterations can achieve the solution close to the optimal, thus set the number of iteration to 2."
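The two reusable pieces of this setup, the Gaussian-shaped target response and the 5-level scale pyramid with a 1.01 scale-step, can be sketched as follows. This is a minimal illustration, not the authors' code: the paper's "standard deviation of wh/16" is interpreted here as sqrt(w*h)/16, which is an assumption on our part (that form is common in correlation-filter trackers).

```python
import numpy as np

def gaussian_response(rows, cols, target_w, target_h):
    """Desired correlation-filter output: a 2D Gaussian peak at the center.

    Assumption: sigma = sqrt(target_w * target_h) / 16, our reading of the
    paper's "standard deviation of wh/16".
    """
    sigma = np.sqrt(target_w * target_h) / 16.0
    ys = np.arange(rows) - (rows - 1) / 2.0   # row offsets from the center
    xs = np.arange(cols) - (cols - 1) / 2.0   # column offsets from the center
    xx, yy = np.meshgrid(xs, ys)
    return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))

# Scale pyramid as in the paper: 5 scales with a scale-step of 1.01,
# centered on the current target size (factor 1.0 in the middle).
scale_step = 1.01
num_scales = 5
scales = scale_step ** (np.arange(num_scales) - num_scales // 2)
```

At detection time such trackers extract a feature patch at each of the five scaled sizes, correlate each with the learned filter, and keep the scale whose response peak is highest; the Gaussian label above is what the filter is regressed against during learning.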