A Bottom-Up Clustering Approach to Unsupervised Person Re-Identification
Authors: Yutian Lin, Xuanyi Dong, Liang Zheng, Yan Yan, Yi Yang
AAAI 2019, pp. 8738–8745 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on the large-scale image and video re-ID datasets, including Market-1501, DukeMTMC-reID, MARS and DukeMTMC-VideoReID. The experimental results demonstrate that our algorithm is not only superior to state-of-the-art unsupervised re-ID approaches, but also performs favorably against competing transfer learning and semi-supervised learning methods. |
| Researcher Affiliation | Academia | 1CAI, University of Technology Sydney, 2Australian National University 3Department of Computer Science, Texas State University |
| Pseudocode | Yes | Algorithm 1 The Bottom-Up Clustering (BUC) Framework |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of the described methodology. |
| Open Datasets | Yes | Market-1501 (Zheng et al. 2015) is a large-scale dataset for person re-ID captured by 6 cameras in a university campus. It contains 12,936 images of 751 identities for training and 19,732 images of 750 identities for testing. DukeMTMC-reID (Zheng, Zheng, and Yang 2015) is a large-scale re-ID dataset derived from the DukeMTMC dataset (Ristani et al. 2016). MARS (Zheng et al. 2016) is a large-scale video-based dataset for person re-ID... DukeMTMC-VideoReID (Wu et al. 2018a) is a large-scale video-based re-ID dataset derived from the DukeMTMC dataset (Ristani et al. 2016). We also conduct image classification experiments on CIFAR-10 (Krizhevsky and Hinton 2009). |
| Dataset Splits | Yes | Evaluate on the validation set performance P |
| Hardware Specification | Yes | On Market-1501 and DukeMTMC-reID, it takes about 4 hours to finish the training procedure with a GTX 1080 Ti GPU. |
| Software Dependencies | No | The paper mentions using 'ResNet-50' and 'ImageNet' and optimization methods like 'stochastic gradient descent' but does not specify software dependencies with version numbers (e.g., PyTorch 1.x, Python 3.x). |
| Experiment Setup | Yes | We adopt ResNet-50 as the CNN backbone to conduct all the experiments. We initialize it by the ImageNet (Krizhevsky, Sutskever, and Hinton 2012) pretrained model with the last classification layer removed. For all the experiments if not specified, we set the number of training epochs in the first stage to be 20, the batch size to be 16, the dropout rate to be 0.5, mp to be 0.05 and λ in Eq. (6) to be 0.005. We use stochastic gradient descent with a momentum of 0.9 to optimize the model. The learning rate is initialized to 0.1 and changed to 0.01 after 15 epochs. |
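The reported experiment setup can be collected into a small configuration sketch. This is not code from the paper; the names `CONFIG` and `learning_rate` are illustrative, and only the numeric values come from the setup quoted above.

```python
# Hyperparameters as reported in the paper's experiment-setup description.
CONFIG = {
    "backbone": "ResNet-50 (ImageNet-pretrained, last classification layer removed)",
    "first_stage_epochs": 20,
    "batch_size": 16,
    "dropout": 0.5,
    "mp": 0.05,            # cluster-merging percentage
    "lambda_eq6": 0.005,   # λ in the paper's Eq. (6)
    "sgd_momentum": 0.9,
}

def learning_rate(epoch: int) -> float:
    """Step schedule from the paper: starts at 0.1, drops to 0.01 after 15 epochs."""
    return 0.1 if epoch < 15 else 0.01
```

For example, `learning_rate(14)` returns 0.1 and `learning_rate(15)` returns 0.01, matching the reported schedule.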