Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

MIMTrack: In-Context Tracking via Masked Image Modeling

Authors: Xingmei Wang, Guohao Nie, Jiaxiang Meng, Zining Yan

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the effectiveness of MIMTrack, which achieved an AO score of 75.3% on GOT-10k, surpassing SeqTrack, OSTrack, etc. Table 1: Tracking results on four popular benchmarks: GOT-10k, TrackingNet, LaSOT, LaSOText and UAV123.
Researcher Affiliation | Academia | Xingmei Wang^1, Guohao Nie^1*, Jiaxiang Meng^1*, Zining Yan^2. ^1 College of Computer Science and Technology, Harbin Engineering University; ^2 College of Design and Engineering, National University of Singapore. EMAIL, zn EMAIL
Pseudocode | No | The paper describes the methodology and framework with text and equations, but it does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access information to source code, such as a repository link or an explicit statement about code release.
Open Datasets | Yes | The training dataset consists of the training splits of COCO (Lin et al. 2014), LaSOT (Fan et al. 2019), GOT-10k (Huang, Zhao, and Huang 2019), and TrackingNet (Muller et al. 2018).
Dataset Splits | Yes | The training dataset consists of the training splits of COCO (Lin et al. 2014), LaSOT (Fan et al. 2019), GOT-10k (Huang, Zhao, and Huang 2019), and TrackingNet (Muller et al. 2018). According to the official protocol, we have trained our MIMTrack only on the GOT-10k training split.
Hardware Specification | Yes | MIMTrack is trained and tested on a single 3090 GPU using Python 3.8 and PyTorch 1.11.0.
Software Dependencies | Yes | MIMTrack is trained and tested on a single 3090 GPU using Python 3.8 and PyTorch 1.11.0.
Experiment Setup | Yes | The optimizer is AdamW (Loshchilov and Hutter 2017). The learning rate and batch size are set to 1e-4 and 16, respectively. Our model is trained for 500 epochs with 60k matching pairs per epoch. After 400 iterations, the learning rate is reduced by a factor of 10. ... The update period and threshold are 4 and 0.7, respectively.
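The quoted experiment setup can be condensed into a minimal schedule sketch. This is a hedged illustration, not the authors' code: all names here are invented, and the quoted "after 400" learning-rate drop is assumed to apply at the epoch level within the 500-epoch schedule.

```python
# Sketch of the reported MIMTrack training schedule (values from the quoted
# setup; identifiers are illustrative, and the point of the 10x learning-rate
# drop is assumed to be epoch 400 of 500).

BASE_LR = 1e-4          # initial learning rate for AdamW
BATCH_SIZE = 16
TOTAL_EPOCHS = 500
PAIRS_PER_EPOCH = 60_000
DROP_AT_EPOCH = 400     # assumption: drop happens after epoch 400
DROP_FACTOR = 10        # "reduced by a factor of 10"

def learning_rate(epoch: int) -> float:
    """Step schedule: BASE_LR until the drop point, then BASE_LR / 10."""
    return BASE_LR if epoch < DROP_AT_EPOCH else BASE_LR / DROP_FACTOR

# Optimizer steps per epoch implied by the quoted numbers.
steps_per_epoch = PAIRS_PER_EPOCH // BATCH_SIZE
print(steps_per_epoch)                     # → 3750
print(learning_rate(399), learning_rate(400))
```

A step schedule like this matches common single-object-tracking training recipes (e.g. a late one-time learning-rate decay near the end of training).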