Real-Time Object Tracking via Meta-Learning: Efficient Model Adaptation and One-Shot Channel Pruning

Authors: Ilchae Jung, Kihyun You, Hyeonwoo Noh, Minsu Cho, Bohyung Han

AAAI 2020, pp. 11205-11212 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental evaluation on the standard datasets demonstrates its outstanding accuracy and speed compared to the state-of-the-art methods.
Researcher Affiliation | Academia | Ilchae Jung (1,2), Kihyun You (1), Hyeonwoo Noh (1,2), Minsu Cho (1), Bohyung Han (2); 1: Computer Vision Lab., POSTECH, Korea; 2: Computer Vision Lab., ECE & ASRI, Seoul National University, Korea; {chey0313, kihyun13, shgusdngogo, mscho}@postech.ac.kr, bhhan@snu.ac.kr
Pseudocode | Yes | Algorithm 1: Meta-Learning for Fast Adaptation
Open Source Code | No | The paper does not contain any explicit statements about releasing source code or links to a code repository for the described methodology.
Open Datasets | Yes | We pretrain MetaRTT and MetaRTT+Prune on ImageNet-Vid (Russakovsky et al. 2015), which contains more than 3,000 videos with 30 object classes labeled for video object detection. [...] We pretrain MetaRTT+COCO on ImageNet-Vid and the augmented version of COCO (Lin et al. 2014).
Dataset Splits | Yes | We randomly select 6 frames from a single video to construct an episode, and use the first frame for D_init, the last frame for D_test^std, and the remaining frames for D_on. (This episode construction is sketched in the code example below the table.)
Hardware Specification | Yes | Our algorithm is implemented in PyTorch with 3.60 GHz Intel Core i7-6850K and NVIDIA Titan Xp Pascal GPU.
Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number for this or any other software dependency.
Experiment Setup | Yes | We set K_init and K_on to 5 throughout our experiment. The meta-parameters are optimized over 40K simulated episodes using ADAM with fixed learning rate 10^-4. [...] We optimize the network by ADAM for 30K iterations with learning rate 5 × 10^-5.
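
The Dataset Splits and Experiment Setup rows above describe a concrete meta-training recipe: 6 frames per episode (first for D_init, last for D_test^std, the rest for D_on), K_init = K_on = 5 adaptation iterations, and ADAM with a fixed learning rate of 10^-4 over about 40K simulated episodes. Since no official code is released, the sketch below is only a minimal, generic MAML-style rendering of that recipe under stated assumptions: TinyTracker, sample_video, make_episode, and INNER_LR are hypothetical placeholders, and the paper's actual Algorithm 1 (Meta-Learning for Fast Adaptation) and its one-shot channel pruning are not reproduced here.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for the tracker's classification head; the real model
# is a CNN tracker, which this sketch does not reproduce.
class TinyTracker(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 2)  # target vs. background score

    def forward(self, x, weights=None):
        # Functional forward pass so adapted (fast) weights can be plugged in.
        if weights is None:
            return self.fc(x)
        return F.linear(x, weights["fc.weight"], weights["fc.bias"])

def sample_video(num_frames=20, n_boxes=32, feat_dim=512):
    """Hypothetical loader: random candidate-box features and binary labels
    per frame, standing in for an ImageNet-Vid video."""
    frames = [torch.randn(n_boxes, feat_dim) for _ in range(num_frames)]
    labels = [torch.randint(0, 2, (n_boxes,)) for _ in range(num_frames)]
    return frames, labels

def make_episode(frames, labels):
    """Pick 6 frames of one video: first -> D_init, last -> D_test^std,
    the remaining four -> D_on (as stated in the Dataset Splits row)."""
    idx = sorted(random.sample(range(len(frames)), 6))
    f = [frames[i] for i in idx]
    y = [labels[i] for i in idx]
    d_init = (f[0], y[0])
    d_on = (torch.cat(f[1:5]), torch.cat(y[1:5]))
    d_test = (f[5], y[5])
    return d_init, d_on, d_test

def adapt(model, weights, data, steps, inner_lr):
    """Take `steps` inner gradient steps on `data`; create_graph=True keeps
    the graph so the meta-gradient can flow back to the initial weights."""
    x, y = data
    for _ in range(steps):
        loss = F.cross_entropy(model(x, weights), y)
        grads = torch.autograd.grad(loss, list(weights.values()), create_graph=True)
        weights = {k: w - inner_lr * g for (k, w), g in zip(weights.items(), grads)}
    return weights

K_INIT = K_ON = 5   # adaptation iterations, as reported in the paper
INNER_LR = 1e-3     # placeholder inner-loop step size; not taken from the paper
model = TinyTracker()
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # fixed meta learning rate 1e-4

for episode in range(40_000):  # roughly 40K simulated episodes
    d_init, d_on, d_test = make_episode(*sample_video())
    weights = dict(model.named_parameters())
    weights = adapt(model, weights, d_init, K_INIT, INNER_LR)  # initial adaptation
    weights = adapt(model, weights, d_on, K_ON, INNER_LR)      # online adaptation
    x_t, y_t = d_test
    meta_loss = F.cross_entropy(model(x_t, weights), y_t)      # evaluate on D_test^std
    meta_opt.zero_grad()
    meta_loss.backward()  # meta-gradient w.r.t. the shared initialization
    meta_opt.step()
```

The point of the meta-learned initialization is to make adaptation useful after only K_init = K_on = 5 gradient steps, which is what the paper leverages for real-time tracking; the one-shot channel pruning named in the title is a separate stage not covered by this sketch.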