Self-Supervised Multi-Object Tracking with Cross-input Consistency

Authors: Favyen Bastani, Songtao He, Samuel Madden

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our unsupervised method on MOT17 and KITTI. Remarkably, we find that, despite training only on unlabeled video, our unsupervised approach outperforms four supervised methods published in the last 1–2 years, including Tracktor++ [1], FAMNet [5], GSM [18], and mmMOT [29].
Researcher Affiliation | Academia | Favyen Bastani (MIT CSAIL, favyen@csail.mit.edu); Songtao He (MIT CSAIL, songtao@csail.mit.edu); Sam Madden (MIT CSAIL, madden@csail.mit.edu)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://favyen.com/uns20/.
Open Datasets | Yes | We evaluate our approach on the MOT17 and KITTI benchmarks against 9 baselines... For MOT17, we collect unlabeled video from two sources: we use five hours of video from seven YouTube walking tours, and all train and test sequences from the PathTrack dataset [20] (we do not use the PathTrack ground truth annotations). For KITTI, we use both the 46 minutes of video in the KITTI dataset and 7 hours of video from Berkeley DeepDrive [27].
Dataset Splits | No | The paper specifies training and testing splits for the MOT17 and KITTI datasets, but it does not explicitly provide information about a validation split (e.g., percentages or counts).
Hardware Specification | Yes | We train our tracker model on an NVIDIA Tesla V100 GPU; training time varies between 4 and 24 hours depending on the input-hiding scheme.
Software Dependencies | No | The paper mentions using a YOLOv5 model and the Adam optimizer but does not provide specific version numbers for these or other software dependencies, such as programming languages or libraries.
Experiment Setup | Yes | During training, we randomly select sequence lengths n between 4 and 16 frames, and apply stochastic gradient descent one sequence at a time. We apply the Adam optimizer with learning rate 0.0001, decaying to 0.00001 after plateau.
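
The experiment-setup quote maps onto a standard optimizer configuration. The sketch below is a minimal, hypothetical PyTorch rendering of it, assuming the tracker is trained as a PyTorch model: Adam at learning rate 1e-4, reduced tenfold to 1e-5 once the loss plateaus, with each gradient step taken on one randomly sampled sequence of 4–16 frames. The `model`, `consistency_loss`, and `sample_sequence` names are illustrative placeholders, not the paper's code; the authors' actual implementation is in the release at https://favyen.com/uns20/.

```python
import random
import torch

# Hypothetical placeholders -- the real tracker network and cross-input
# consistency loss are defined in the authors' code release.
model = torch.nn.Linear(64, 64)            # stand-in for the tracker network


def consistency_loss(outputs):             # stand-in for the training loss
    return outputs.pow(2).mean()


def sample_sequence(n):
    # Stand-in for loading n consecutive frames / detections from unlabeled video.
    return torch.randn(n, 64)


optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# "decaying to 0.00001 after plateau": cut the learning rate by 10x
# (1e-4 -> 1e-5) when the training loss stops improving.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.1, min_lr=1e-5
)

for step in range(10000):                  # step count is arbitrary here
    n = random.randint(4, 16)              # random sequence length in [4, 16]
    frames = sample_sequence(n)
    loss = consistency_loss(model(frames))
    optimizer.zero_grad()
    loss.backward()                        # one sequence per gradient step
    optimizer.step()
    scheduler.step(loss.item())
```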