Unsupervised Embedding and Association Network for Multi-Object Tracking

Authors: Yu-Lei Li

IJCAI 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experimental results show that UEANet confirms the outstanding ability to suppress IDS and achieves comparable performance compared with state-of-the-art methods on three MOT datasets. |
| Researcher Affiliation | Academia | Yu-Lei Li, School of Informatics, Xiamen University (yuleili2008@gmail.com) |
| Pseudocode | No | The paper describes the methods using text and mathematical formulations but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing open-source code or a link to a code repository. |
| Open Datasets | Yes | We evaluate the proposed UEANet on the MOT2016 [Milan et al., 2016], MOT2017 [Milan et al., 2016] and MOT2020 [Dendorfer et al., 2020] datasets. |
| Dataset Splits | Yes | For ablation studies in Section 4.2, we follow the previous methods [Zhang et al., 2021b; Wang et al., 2021b; Wu et al., 2021] to use the first half of each video sequence of the MOT2017 training set for training while using the second half for validation. |
| Hardware Specification | Yes | We train the backbone [Zhang et al., 2021b], and the detection, identity embedding and data association branches of UEANet for 30 epochs with a learning rate of 1 × 10⁻⁴ and a mini-batch size of 24 on 4 RTX2080 Ti GPUs (using 2 RTX2080 Ti GPUs for ablation studies). |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., specific versions of Python, PyTorch, or TensorFlow). |
| Experiment Setup | Yes | We train the backbone [Zhang et al., 2021b], and the detection, identity embedding and data association branches of UEANet for 30 epochs with a learning rate of 1 × 10⁻⁴ and a mini-batch size of 24 on 4 RTX2080 Ti GPUs (using 2 RTX2080 Ti GPUs for ablation studies). ... The association confidence threshold β is 0.4 during inference. |
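The Hardware Specification and Experiment Setup rows together pin down the paper's reported training hyperparameters. As a minimal sketch of how those reported values could be gathered into a single reproducibility config, the snippet below uses purely illustrative names (`UEANetConfig` and its field names are assumptions, since the paper releases no code); only the numeric values come from the quoted setup.

```python
from dataclasses import dataclass


@dataclass
class UEANetConfig:
    """Hypothetical training configuration mirroring the reported setup.

    Field names are illustrative assumptions; the paper publishes no code,
    so only the values themselves are taken from the quoted experiment setup.
    """
    epochs: int = 30                     # "for 30 epochs"
    learning_rate: float = 1e-4          # "learning rate of 1 × 10⁻⁴"
    batch_size: int = 24                 # "mini-batch size of 24"
    num_gpus: int = 4                    # 4 RTX2080 Ti (2 for ablation studies)
    association_threshold: float = 0.4   # β, applied during inference


if __name__ == "__main__":
    # Print the collected hyperparameters for a quick sanity check.
    print(UEANetConfig())
```

Collecting the values this way makes the gap flagged in the table concrete: the numbers are fully specified, but without released code or pinned software versions, everything around them (optimizer, schedules, augmentation) would have to be guessed.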