Color-Sensitive Person Re-Identification

Authors: Guan'an Wang, Yang Yang, Jian Cheng, Jinqiao Wang, Zengguang Hou

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive evaluations on two benchmark datasets show that our approach significantly outperforms state-of-the-art Re-ID models.
Researcher Affiliation | Academia | (1) The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; (2) National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China; (3) University of Chinese Academy of Sciences, Beijing, China; (4) Center for Excellence in Brain Science and Intelligence Technology, Beijing, China
Pseudocode | No | The paper describes the model and training process using mathematical equations and textual descriptions, but it does not contain explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code or provide a link to a code repository.
Open Datasets | Yes | Market-1501 [Zheng et al., 2015b] contains 32,669 annotated images of 1,501 identities from 6 cameras. DukeMTMC-reID [Ristani et al., 2016] includes 16,522 training images of 702 identities. The color attributes of both datasets are annotated by [Lin et al., 2017].
Dataset Splits | No | The paper mentions 'training images' for DukeMTMC-reID, gives batch sizes for training, and refers to a 'test stage', but it does not explicitly specify the training/test split needed for reproduction, nor does it mention a validation set.
Hardware Specification | No | The paper does not explicitly describe the hardware (e.g., specific GPU or CPU models, memory specifications) used to run the experiments.
Software Dependencies | No | The paper mentions 'Pytorch' as an implementation framework, but it does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | The training images are augmented with horizontal flip, random cropping, random erasing [Zhong et al., 2017] and normalization. The batch size of real images is set to 192 (24 persons and 8 images for a person), and that of fake images is set to 112 (4 persons, randomly select 7 fake images for a person). We initialize the learning rates of the CNN part at 0.05 and the other parts (classifiers and embedders) at 0.5. The learning rates are decayed by 0.1 every 4,000 iterations, and the model is trained for 15,000 iterations in total. We set margin m = 0.1 empirically.
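
Since the paper publishes no code, the following is a minimal PyTorch sketch of the setup quoted above: the augmentation pipeline and the two-group learning-rate schedule (CNN backbone at 0.05, classifiers/embedders at 0.5, both decayed by 0.1 every 4,000 of 15,000 iterations). The input size, padding before cropping, normalization statistics, momentum value, class count, and the placeholder `backbone`/`heads` modules are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of the reported training setup. The paper releases no code,
# so the 256x128 input size, Pad(10), ImageNet normalization statistics,
# momentum, class count, and placeholder modules are assumptions; only the
# batch sizes, learning rates, schedule, and margin come from the paper.
import torch
import torchvision.transforms as T

# Augmentation: horizontal flip, random cropping, random erasing, normalization.
train_transform = T.Compose([
    T.Resize((256, 128)),                      # assumed input size, common in Re-ID
    T.RandomHorizontalFlip(),
    T.Pad(10),                                 # assumed padding before the crop
    T.RandomCrop((256, 128)),                  # "random cropping"
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],    # assumed ImageNet statistics
                std=[0.229, 0.224, 0.225]),
    T.RandomErasing(),                         # random erasing [Zhong et al., 2017]
])

# Placeholders standing in for the paper's CNN and its classifiers/embedders.
backbone = torch.nn.Sequential(torch.nn.Conv2d(3, 64, 3, padding=1))
heads = torch.nn.Sequential(torch.nn.Linear(64, 751))  # assumed class count

# Two learning-rate groups: CNN part at 0.05, the other parts at 0.5.
optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": 0.05},
        {"params": heads.parameters(), "lr": 0.5},
    ],
    momentum=0.9,                              # assumed; not stated in the paper
)

# Decay both rates by 0.1 every 4,000 iterations; train 15,000 iterations total.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=4000, gamma=0.1)

MARGIN = 0.1        # margin m = 0.1 for the paper's margin-based loss (form not shown)
TOTAL_ITERS = 15000

for it in range(TOTAL_ITERS):
    # ... forward pass on a batch of 192 real + 112 fake images, compute losses,
    # and backpropagate here ...
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
```

Using separate optimizer parameter groups lets a single StepLR scheduler drive both learning rates, which matches the quoted schedule where the 0.05 and 0.5 rates decay together every 4,000 iterations.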