Tracklet Self-Supervised Learning for Unsupervised Person Re-Identification
Authors: Guile Wu, Xiatian Zhu, Shaogang Gong
AAAI 2020, pp. 12362-12369 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the superiority of TSSL over a wide variety of the state-of-the-art alternative methods on four large-scale person re-id benchmarks, including Market-1501, DukeMTMC-ReID, MARS and DukeMTMC-VideoReID. |
| Researcher Affiliation | Collaboration | Queen Mary University of London; Vision Semantics Limited |
| Pseudocode | Yes | Algorithm 1 Tracklet Self-Supervised Learning. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We aim at optimising a feature embedding space for both image and video unsupervised re-id, so we also evaluated both image (Market-1501 (Zheng et al. 2015) and DukeMTMC-ReID (Ristani et al. 2016; Zheng, Zheng, and Yang 2017)) and video (MARS (Zheng et al. 2016) and DukeMTMC-VideoReID (Ristani et al. 2016; Wu et al. 2018a)) datasets. |
| Dataset Splits | No | The paper does not explicitly mention a validation split, nor does it give percentages or counts for training, validation, and test sets. It mentions a "maximal training epoch" but describes no validation procedure for hyperparameter tuning or early stopping. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper states "We used ResNet-50 (He et al. 2016) (pre-trained on ImageNet) as the feature embedding network." but does not provide version numbers for any software libraries or other dependencies (a hedged backbone-loading sketch follows this table). |
| Experiment Setup | Yes | We empirically set α = 2 for Eq. (4), η = 0.5 for Eq. (6), λ = 0.1 and s = 10 for Eq. (5), τ = 0.1 for Eq. (7), δ = 0.05. We set Nk = 4 for cluster merging. The maximal training epoch was set to 20 for the first step and to 5 for the remaining steps. We used Stochastic Gradient Descent (SGD) as the optimiser with the initial learning rate at 0.01 for the backbone model and a decay of 0.1 after 15 training epochs. (See the optimiser sketch after this table.) |
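
The paper reports an ImageNet-pretrained ResNet-50 as its feature embedding network but names no framework. A minimal sketch of such a backbone in PyTorch/torchvision follows; the framework choice and the stripping of the classifier head are our assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn as nn
from torchvision import models


class FeatureEmbedder(nn.Module):
    """ImageNet-pretrained ResNet-50 with the classifier head removed,
    used as a feature embedding network. A sketch: everything beyond
    "ResNet-50 pre-trained on ImageNet" is an assumption."""

    def __init__(self):
        super().__init__()
        resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # Keep all layers up to and including global average pooling.
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)   # (N, 2048, 1, 1)
        return feats.flatten(1)    # (N, 2048) embedding vectors
```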
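
The reported hyperparameters and optimisation schedule map onto a standard SGD setup. Below is a sketch under the assumption of a PyTorch training loop; `train_one_epoch` and the stand-in backbone are hypothetical placeholders, and the equation numbers refer to the paper's Eqs. (4)-(7), which the excerpt does not reproduce.

```python
import torch
from torchvision import models

# Hyperparameters as reported in the paper (symbols refer to its equations).
ALPHA = 2      # α, Eq. (4)
ETA = 0.5      # η, Eq. (6)
LAMBDA_ = 0.1  # λ, Eq. (5)
S = 10         # s, Eq. (5)
TAU = 0.1      # τ, Eq. (7)
DELTA = 0.05   # δ
N_K = 4        # Nk, used for cluster merging

model = models.resnet50(weights=None)  # stand-in for the embedding backbone

# SGD with initial learning rate 0.01 for the backbone,
# decayed by a factor of 0.1 after 15 training epochs.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[15], gamma=0.1)

MAX_EPOCHS_FIRST_STEP = 20  # maximal epochs for the first training step
MAX_EPOCHS_LATER_STEPS = 5  # maximal epochs for each remaining step

for epoch in range(MAX_EPOCHS_FIRST_STEP):
    # train_one_epoch(model, optimizer, ...)  # hypothetical training loop
    scheduler.step()
```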