Deep View-Aware Metric Learning for Person Re-Identification

Authors: Pu Chen, Xinyi Xu, Cheng Deng

IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiment results on datasets CUHK01, CUHK03, and PRID2011 demonstrate the superiority of our method compared with state-of-the-art approaches.
Researcher Affiliation | Academia | School of Electronic Engineering, Xidian University, Xi'an 710071, China; puchen@stu.xidian.edu.cn, xyxu.xd@gmail.com, chdeng@mail.xidian.edu.cn
Pseudocode | Yes | Algorithm 1 Back Propagation gradient
Open Source Code | No | The paper does not provide any concrete access information (e.g., a specific repository link or an explicit code-release statement) for the source code of the described methodology.
Open Datasets | Yes | We evaluate our method on three datasets, CUHK03 [Li et al., 2014], CUHK01 [Li et al., 2012] and PRID2011 [Hirzer et al., 2011].
Dataset Splits | Yes | CUHK03 dataset contains 13164 images from 1360 persons. We select 1160 persons for training, 100 for validation and 100 for testing, following the same setting as [Li et al., 2014] and [Ahmed et al., 2015]. (A sketch of such an identity-disjoint split appears below the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types) used for running its experiments.
Software Dependencies | No | The paper mentions general components such as a ReLU layer and a Batch Normalization (BN) layer, along with references to CNN architectures, but does not provide specific software names with version numbers.
Experiment Setup | Yes | All the images are resized to 128 × 64 before being fed to the network and the batch size of the input is 64. For the identity classifier, we first pretrain a model on CUHK03 with the learning rate set to 0.001, then fine-tune this model with a learning rate of 0.002. We picked the optimal loss weights α1 = 0.4, α2 = 0.4, α3 = 0.2 experimentally, and all the margin parameters β1, β2, β3 are set to 1 [Song et al., 2016]. (A hedged configuration sketch based on these values appears below the table.)
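
The CUHK03 split quoted in the Dataset Splits row is identity-disjoint (1160/100/100 persons, not images). Below is a minimal sketch of such a split; the function name, the person-ID list, and the random seed are illustrative assumptions rather than details taken from the paper.

```python
import random

def split_identities(person_ids, n_train=1160, n_val=100, n_test=100, seed=0):
    """Shuffle person IDs and split them into disjoint train/val/test sets."""
    ids = list(person_ids)
    assert len(ids) >= n_train + n_val + n_test
    random.Random(seed).shuffle(ids)  # shuffle identities, never individual images
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:n_train + n_val + n_test]
    return train, val, test

# CUHK03 contains 1360 identities in total (13164 images).
train_ids, val_ids, test_ids = split_identities(range(1360))
```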
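
The Experiment Setup row fixes the image size, batch size, learning rates, loss weights, and margins. The sketch below shows one way those numbers could be wired together in a PyTorch-style pipeline; the preprocessing steps, the placeholder loss terms, and all identifier names are assumptions, since the paper does not release code.

```python
from torchvision import transforms

IMAGE_SIZE = (128, 64)   # (height, width) the images are resized to
BATCH_SIZE = 64
PRETRAIN_LR = 0.001      # pretraining the identity classifier on CUHK03
FINETUNE_LR = 0.002      # fine-tuning, as quoted in the table
ALPHA = (0.4, 0.4, 0.2)  # loss weights alpha1, alpha2, alpha3
BETA = (1.0, 1.0, 1.0)   # margin parameters beta1, beta2, beta3

# Resize to 128 x 64 before feeding images to the network.
preprocess = transforms.Compose([
    transforms.Resize(IMAGE_SIZE),
    transforms.ToTensor(),
])

def total_loss(loss1, loss2, loss3, alpha=ALPHA):
    """Weighted sum of the three loss terms; the individual terms stand in
    for whatever losses the paper defines."""
    return alpha[0] * loss1 + alpha[1] * loss2 + alpha[2] * loss3
```

Under this reading, the optimizer (its type is not specified in the extract) would be instantiated once with PRETRAIN_LR for the CUHK03 pretraining stage and again with FINETUNE_LR for the fine-tuning stage.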