UCMCTrack: Multi-Object Tracking with Uniform Camera Motion Compensation

Authors: Kefu Yi, Kai Luo, Xiaolei Luo, Jiangui Huang, Hao Wu, Rongdong Hu, Wei Hao

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We conducted a fair evaluation of UCMCTrack on multiple publicly available datasets, including MOT17 (Milan et al. 2016), MOT20 (Dendorfer et al. 2020), DanceTrack (Sun et al. 2022), and KITTI (Geiger et al. 2013)." "Ablation Studies on UCMC" |
| Researcher Affiliation | Collaboration | (1) School of Traffic and Transportation, Changsha University of Science and Technology; (2) College of Automotive and Mechanical Engineering, Changsha University of Science and Technology; (3) Changsha Intelligent Driving Institute |
| Pseudocode | Yes | "For the pseudocode please refer to Appendix A." |
| Open Source Code | Yes | "More details and code are available at https://github.com/corfyi/UCMCTrack." |
| Open Datasets | Yes | "We conducted a fair evaluation of UCMCTrack on multiple publicly available datasets, including MOT17 (Milan et al. 2016), MOT20 (Dendorfer et al. 2020), DanceTrack (Sun et al. 2022), and KITTI (Geiger et al. 2013)." |
| Dataset Splits | Yes | "For MOT17, the validation set was split following the prevailing conventions (Zhou, Koltun, and Krähenbühl 2020)." |
| Hardware Specification | No | The paper reports running at 1000 FPS using "just a single CPU", but does not provide specific hardware details (such as CPU model, GPU, or memory) used for training or running experiments. |
| Software Dependencies | No | The paper mentions using YOLOX and ByteTrack, but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | No | The paper describes general implementation details such as the object detection method, weight files, and the camera motion compensation model, but does not provide specific hyperparameters (e.g., learning rate, batch size, number of epochs) or detailed training configurations. |
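The split convention cited in the Dataset Splits row (Zhou, Koltun, and Krähenbühl 2020) is commonly applied by cutting each MOT17 training sequence in half, using the first half for training and the second for validation. A minimal sketch of that half-half split follows; the sequence names and frame counts are illustrative placeholders, not values taken from the paper.

```python
# Sketch of the half-half MOT17 validation split popularized by
# CenterTrack (Zhou, Koltun, and Krähenbühl 2020): each training
# sequence is divided in two, first half for training, second half
# for validation. Frame counts here are illustrative placeholders.

def half_split(seq_lengths):
    """Map each sequence name to (train_frames, val_frames) ranges.

    Frames are assumed to be 1-indexed, as in the MOT file format.
    """
    splits = {}
    for name, n_frames in seq_lengths.items():
        mid = n_frames // 2
        train_frames = range(1, mid + 1)
        val_frames = range(mid + 1, n_frames + 1)
        splits[name] = (train_frames, val_frames)
    return splits

# Example with placeholder frame counts.
splits = half_split({"MOT17-02": 600, "MOT17-04": 1050})
train, val = splits["MOT17-02"]
print(len(train), len(val))  # 300 300
```

The two ranges are disjoint and together cover every frame, so no annotation is shared between the training and validation halves.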