Multi-Task Driven Feature Models for Thermal Infrared Tracking

Authors: Qiao Liu, Xin Li, Zhenyu He, Nana Fan, Di Yuan, Wei Liu, Yongsheng Liang

AAAI 2020, pp. 11604-11611

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results on three benchmarks show that the proposed algorithm achieves a relative gain of 10% over the baseline and performs favorably against the state-of-the-art methods. Codes and the proposed TIR dataset are available at https://github.com/QiaoLiuHit/MMNet.
Researcher Affiliation | Academia | 1) Harbin Institute of Technology, Shenzhen; 2) Shenzhen Institute of Information Technology; 3) Peng Cheng Laboratory
Pseudocode | No | The paper includes network architecture diagrams (Figure 1, Figure 2) but no pseudocode or algorithm blocks.
Open Source Code | Yes | Codes and the proposed TIR dataset are available at https://github.com/QiaoLiuHit/MMNet.
Open Datasets | Yes | Codes and the proposed TIR dataset are available at https://github.com/QiaoLiuHit/MMNet. We first train the proposed network on the VID2015 (Russakovsky et al. 2015) grayscale dataset with a multi-task loss. (A minimal loss sketch appears after the table.)
Dataset Splits | No | The paper describes training on VID2015 and their constructed TIR dataset with specific epochs and learning rates, but it does not provide explicit train/validation/test split information.
Hardware Specification | Yes | We conduct the experiment using the MatConvNet (Vedaldi and Lenc 2015) toolbox on a PC with an i7 4.0 GHz CPU and a GTX-1080 GPU.
Software Dependencies | No | The paper mentions using the "MatConvNet (Vedaldi and Lenc 2015)" toolbox but does not specify a version number for it or any other software libraries or dependencies.
Experiment Setup | Yes | We train the proposed network using a Stochastic Gradient Descent (SGD) method with a batch size of 8 and momentum of 0.9. At the first stage, we train the network for 60 epochs on the VID2015 dataset, with the learning rate exponentially decaying from 10⁻² to 10⁻⁵. We set λ1 = λ2 = λ3 = 1 in Eq. 10 at all training stages. At the re-training and fine-tuning stages, we train the network for 30 epochs on the constructed TIR dataset, with the learning rate exponentially decaying from 10⁻³ to 10⁻⁵. In the mix-training process, we train the network for 70 epochs using the same parameters as for training on the VID2015 dataset. (A schedule sketch appears after the table.)
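
The Open Datasets row quotes the paper's multi-task loss, which Eq. 10 (per the Experiment Setup row) combines as a weighted sum of three task losses with weights λ1, λ2, λ3, all set to 1. Below is a minimal PyTorch-style sketch of that weighted combination; the placeholder loss values and names are illustrative assumptions, not the paper's actual task terms.

```python
import torch

def multi_task_loss(loss_terms, weights=(1.0, 1.0, 1.0)):
    # Weighted sum L = λ1*L1 + λ2*L2 + λ3*L3, the Eq. 10 form with all
    # weights set to 1 as stated in the Experiment Setup row.
    assert len(loss_terms) == len(weights)
    return sum(w * l for w, l in zip(weights, loss_terms))

# Placeholder scalar losses (assumptions; the paper's task terms differ).
l1 = torch.tensor(0.8)
l2 = torch.tensor(0.5)
l3 = torch.tensor(0.3)
total = multi_task_loss([l1, l2, l3])  # equal weights, as in the paper
```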
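
The Experiment Setup row specifies SGD with batch size 8, momentum 0.9, and a learning rate decaying exponentially from 10⁻² to 10⁻⁵ over 60 epochs. The sketch below implements one common reading of that schedule, a per-epoch geometric interpolation between the two endpoints; the exact decay formula and the stand-in model are assumptions, as the paper does not give them.

```python
import torch

def exp_decay_lr(epoch, num_epochs=60, lr_start=1e-2, lr_end=1e-5):
    # Geometric interpolation: lr(0) = lr_start, lr(num_epochs - 1) = lr_end.
    t = epoch / max(num_epochs - 1, 1)
    return lr_start * (lr_end / lr_start) ** t

model = torch.nn.Linear(8, 2)  # stand-in for the actual tracking network
optimizer = torch.optim.SGD(model.parameters(), lr=exp_decay_lr(0), momentum=0.9)

for epoch in range(60):
    for group in optimizer.param_groups:
        group["lr"] = exp_decay_lr(epoch)  # set this epoch's learning rate
    # ... one training epoch over batches of size 8 would run here ...
```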