Sampling-Resilient Multi-Object Tracking
Authors: Zepeng Li, Dongxiang Zhang, Sai Wu, Mingli Song, Gang Chen
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on three benchmark datasets show that our proposed tracker achieves the best trade-off between efficiency and accuracy. |
| Researcher Affiliation | Academia | (1) The State Key Laboratory of Blockchain and Data Security, Zhejiang University; (2) College of Computer Science and Technology, Zhejiang University |
| Pseudocode | No | The paper describes the proposed methods using textual explanations and mathematical equations, but it does not include any pseudocode blocks, algorithms, or flowcharts labeled as such. |
| Open Source Code | No | The paper mentions using YOLOX provided by previous trackers and comparing against 'open-sourced trackers,' but it does not provide an explicit statement about releasing its own source code for the proposed SR-Track methodology or a link to such a repository. |
| Open Datasets | Yes | We use three benchmark datasets for performance evaluation, including MOT17 (Milan et al. 2016), MOT20 (Dendorfer et al. 2020) and DanceTrack (Sun et al. 2022). |
| Dataset Splits | Yes | DanceTrack is a recent dataset proposed to emphasize the importance of motion analysis. ... It provides 100 videos and the split ratio for training, validation and test dataset is 40 : 25 : 35. |
| Hardware Specification | Yes | All the experiments are conducted using PyTorch and ran on a desktop with 10th Gen Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz and NVIDIA GeForce RTX 3090Ti GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch' as the framework used for experiments but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | As to our proposed Kalman filter, we set hidden size to 128 for the LSTM network and adopt two-layer Bayesian neural network to implement Q and R. All models are trained using the Adam optimizer for 100 epochs with a batch size of 32. The initial learning rate is set to 0.01 and linearly decayed to 0.0001. (See the training-setup sketch below the table.) |
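
For reference, the experiment-setup row reads as a standard PyTorch training configuration. Below is a minimal sketch of the reported optimizer and learning-rate schedule; the model, input dimensionality, and training loop body are hypothetical placeholders, and only the hyperparameters quoted above (LSTM hidden size 128, Adam, 100 epochs, batch size 32, learning rate 0.01 decayed linearly to 0.0001) come from the paper.

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import LambdaLR

EPOCHS = 100                       # reported training length
BATCH_SIZE = 32                    # reported batch size
LR_INIT, LR_FINAL = 0.01, 0.0001   # reported linear LR decay endpoints

# Hypothetical motion model: an LSTM with the reported hidden size of 128.
# The input size (8, e.g. a bounding-box state vector) is an assumption.
model = nn.LSTM(input_size=8, hidden_size=128, batch_first=True)

optimizer = torch.optim.Adam(model.parameters(), lr=LR_INIT)

# Multiplicative factor taking the learning rate linearly from LR_INIT
# at epoch 0 down to LR_FINAL at the final epoch.
def linear_decay(epoch: int) -> float:
    frac = epoch / (EPOCHS - 1)
    return 1.0 - frac * (1.0 - LR_FINAL / LR_INIT)

scheduler = LambdaLR(optimizer, lr_lambda=linear_decay)

for epoch in range(EPOCHS):
    # ... iterate over the training set in batches of BATCH_SIZE,
    # compute the loss, and call optimizer.step() ...
    scheduler.step()
```

The two-layer Bayesian neural network used to implement the noise covariances Q and R is omitted from the sketch, since the paper's quoted setup gives no architectural detail beyond its depth.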