Spatial and Temporal Mutual Promotion for Video-Based Person Re-Identification

Authors: Yiheng Liu, Zhenxun Yuan, Wengang Zhou, Houqiang Li (pp. 8786-8793)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments are conducted on three challenging datasets, i.e., iLIDS-VID, PRID-2011 and MARS. The experimental results demonstrate that our approach outperforms existing state-of-the-art methods of video-based person re-identification on iLIDS-VID and MARS and achieves favorable results on PRID-2011.
Researcher Affiliation | Academia | Yiheng Liu,¹ Zhenxun Yuan,² Wengang Zhou,¹ Houqiang Li¹ (¹CAS Key Laboratory of Technology in GIPAS, EEIS Department, University of Science and Technology of China; ²School of Electrical and Computer Engineering, Purdue University) lyh156@mail.ustc.edu.cn, yuan141@purdue.edu, {zhwg,lihq}@ustc.edu.cn
Pseudocode | No | The paper describes the proposed methods using figures and mathematical equations but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | The code will be made publicly available: https://github.com/yolomax/rru-reid
Open Datasets | Yes | We evaluate our method on three public video datasets for person re-identification including iLIDS-VID (Wang et al. 2014), PRID-2011 (Hirzer et al. 2011) and MARS (Zheng et al. 2016).
Dataset Splits | Yes | iLIDS-VID and PRID-2011 datasets are randomly split into two sets with the same number of pedestrians for training and testing... We use the average CMC table over 10 trials with different train/test splits... Following (Zheng et al. 2016), the partition for training and testing set in MARS dataset is given.
Hardware Specification | No | The paper does not specify the exact hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using the "Inception-v3 model" and "stochastic gradient descent" but does not specify any software libraries, frameworks, or their version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | During the training process, we set N = 10, K = 2, T = 8 and m = 0.4. The dropout rate in the classifier block is set to 0.5. The network is updated by the stochastic gradient descent algorithm with a learning rate of 0.01, weight decay of 5×10⁻⁴ and Nesterov momentum of 0.9. For the pretrained layers, the learning rate is set to 0.1 of the base learning rate.
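For reference, the reported training hyperparameters can be collected into one configuration sketch. This is plain Python; the paper does not name a deep-learning framework, so the PyTorch-style "param_groups" layout and the group names below are illustrative assumptions, not the authors' code:

```python
# Hyperparameters as reported in the paper (AAAI 2019).
# The dict layout and group names are assumptions for illustration.
base_lr = 0.01

train_config = {
    "N": 10,             # reported batch parameter N
    "K": 2,              # reported batch parameter K
    "T": 8,              # reported sequence length T
    "m": 0.4,            # reported margin m
    "dropout": 0.5,      # dropout rate in the classifier block
    "optimizer": "SGD",
    "weight_decay": 5e-4,
    "momentum": 0.9,
    "nesterov": True,
    # Pretrained layers train at 0.1x the base learning rate.
    "param_groups": [
        {"name": "pretrained", "lr": 0.1 * base_lr},
        {"name": "new", "lr": base_lr},
    ],
}
```

A setup like this maps directly onto optimizers that accept per-parameter-group learning rates (e.g., PyTorch's `torch.optim.SGD`), which is the natural way to realize the "0.1 of the base learning rate for pretrained layers" rule.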
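The iLIDS-VID/PRID-2011 evaluation protocol quoted in the Dataset Splits row (random equal-sized identity splits, results averaged over 10 trials) can be sketched as follows. The function and variable names are my own, not the authors' code:

```python
import random

def half_split(person_ids, seed):
    """Randomly split person IDs into equal-sized train/test sets,
    as in the iLIDS-VID / PRID-2011 protocol (illustrative sketch)."""
    ids = list(person_ids)
    random.Random(seed).shuffle(ids)  # deterministic per-trial shuffle
    half = len(ids) // 2
    return ids[:half], ids[half:]

# Results are averaged over 10 trials with different splits;
# iLIDS-VID, for example, contains 300 identities.
splits = [half_split(range(300), seed=s) for s in range(10)]
```

Note that the split is over identities, not over individual sequences, so no person appears in both the training and the test set of a trial.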