A Multi-Task Deep Network for Person Re-Identification

Authors: Weihua Chen, Xiaotang Chen, Jianguo Zhang, Kaiqi Huang

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In the experiments, our approach outperforms most existing person Re-ID algorithms on representative datasets including CUHK03, CUHK01, VIPeR, iLIDS and PRID2011, which clearly demonstrates the effectiveness of the proposed approach."
Researcher Affiliation | Academia | Weihua Chen (1), Xiaotang Chen (1), Jianguo Zhang (3), Kaiqi Huang (1,2,4). (1) CRIPAC & NLPR, CASIA; (2) University of Chinese Academy of Sciences; (3) Computing, School of Science and Engineering, University of Dundee, United Kingdom; (4) CAS Center for Excellence in Brain Science and Intelligence Technology. Email: {weihua.chen, xtchen, kqhuang}@nlpr.ia.ac.cn, j.n.zhang@dundee.ac.uk
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper does not explicitly state that source code for the methodology is provided or linked.
Open Datasets | Yes | "The large dataset is CUHK03 (Li et al. 2014), containing 13164 images from 1360 persons. [...] The four small datasets are CUHK01 (Li, Zhao, and Wang 2012), VIPeR (Gray, Brennan, and Tao 2007), iLIDS (Zheng, Gong, and Xiang 2009) and PRID2011 (Hirzer et al. 2011)."
Dataset Splits | Yes | "We randomly select 1160 persons for training, 100 persons for validation and 100 persons for testing, following exactly the same setting as (Li et al. 2014) and (Ahmed, Jones, and Marks 2015)."
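The quoted protocol is a person-level (identity-disjoint) split of CUHK03's 1360 identities into 1160/100/100. A minimal sketch of such a split is below; the function name, argument names, and fixed seed are illustrative assumptions, not taken from the paper.

```python
import random

def split_persons(person_ids, n_train=1160, n_val=100, n_test=100, seed=0):
    """Randomly partition person identities into disjoint train/val/test
    sets (1160/100/100, matching the protocol of Li et al. 2014).
    The seed is a hypothetical choice for reproducibility of the sketch."""
    ids = list(person_ids)
    assert len(ids) >= n_train + n_val + n_test
    rng = random.Random(seed)
    rng.shuffle(ids)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:n_train + n_val + n_test]
    return train, val, test
```

Splitting by identity rather than by image ensures no person appears in both training and evaluation, which is what makes the Re-ID evaluation meaningful.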
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments.
Software Dependencies | No | "Our method is implemented using the Caffe framework (Jia et al. 2014)." However, no version number for Caffe or any other software dependency is provided.
Experiment Setup | Yes | "All images are resized to 224×224 before being fed to the network. The learning rate is set to 10^-3 consistently across all experiments. For all the datasets, we horizontally mirror each image and increase the dataset sizes fourfold. We use a pre-trained AlexNet model (trained on the ImageNet dataset (Krizhevsky, Sutskever, and Hinton 2012)) to initialize the kernel weights of the first two convolutional layers."
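The quoted setup (224×224 inputs, horizontal-mirror augmentation, learning rate 10^-3) can be sketched outside of Caffe as follows. This is a minimal NumPy illustration of the described preprocessing, not the authors' code: the helper names are assumptions, and nearest-neighbour resizing stands in for whatever interpolation the original pipeline used.

```python
import numpy as np

LEARNING_RATE = 1e-3  # set consistently across all experiments, per the paper

def resize_224(image: np.ndarray) -> np.ndarray:
    """Resize an H x W x 3 image to 224 x 224 via nearest-neighbour
    index sampling (interpolation method is an assumption)."""
    h, w, _ = image.shape
    rows = np.arange(224) * h // 224
    cols = np.arange(224) * w // 224
    return image[rows][:, cols]

def mirror_augment(images):
    """Add a horizontally mirrored copy of each image -- the mirroring
    step the paper describes (the fourfold increase presumably combines
    this with further augmentation not detailed in the quote)."""
    out = []
    for img in images:
        out.append(img)
        out.append(img[:, ::-1])  # flip along the width axis
    return out
```

Mirroring is a natural augmentation for Re-ID since a pedestrian's identity is unchanged under left-right reflection.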