Multi-Rate Gated Recurrent Convolutional Networks for Video-Based Pedestrian Re-Identification

Authors: Zhihui Li, Lina Yao, Feiping Nie, Dingwen Zhang, Min Xu

AAAI 2018

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We conduct extensive experiments on the iLIDS-VID and PRID-2011 datasets, and our experimental results confirm the effectiveness and the generalization ability of our model." |
| Researcher Affiliation | Collaboration | Zhihui Li (Beijing Etrol Technologies Co., Ltd.), Lina Yao (School of Computer Science and Engineering, University of New South Wales), Feiping Nie (Centre for OPTical Imagery Analysis and Learning, Northwestern Polytechnical University), Dingwen Zhang (School of Automation, Northwestern Polytechnical University), Min Xu (School of Electrical and Data Engineering, University of Technology Sydney). |
| Pseudocode | No | The paper describes the architecture and mathematical formulations of the model, but does not include any explicitly labeled "Algorithm" or "Pseudocode" blocks. |
| Open Source Code | No | "We will release our code and trained models upon acceptance." |
| Open Datasets | Yes | iLIDS-VID dataset (Wang et al. 2014): 600 image sequences of 300 distinct individuals. PRID 2011 dataset (Hirzer et al. 2011): 400 image sequences of 200 randomly sampled people from two cameras. |
| Dataset Splits | No | "We randomly split each dataset into 50% of persons for training and 50% of persons for testing for all experiments." The paper mentions only training and testing splits, with no explicit mention of a separate validation split. |
| Hardware Specification | Yes | "We train the network using an Nvidia Titan X Pascal with 12GB memory." |
| Software Dependencies | No | "We implement the proposed model using the framework released by (McLaughlin, del Rincón, and Miller 2016) based on Torch." The paper mentions Torch but does not provide specific version numbers for software dependencies. |
| Experiment Setup | Yes | "The network was trained using stochastic gradient descent with a learning rate of 1e-3, and a batch size of 1, and the input to the Siamese network is alternated between positive and negative sequence pairs, as in (McLaughlin, del Rincón, and Miller 2016). We train the network for 500 epochs." |
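The 50/50 identity split described in the Dataset Splits row can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name, seed, and use of Python's `random` module are assumptions.

```python
import random

def split_persons(person_ids, train_frac=0.5, seed=0):
    """Randomly split person identities into train/test sets.

    Sketch of the 50%/50% identity-level split described in the paper
    (splitting by person, not by sequence, so no identity appears in
    both sets). Seed and signature are illustrative assumptions.
    """
    ids = list(person_ids)
    rng = random.Random(seed)
    rng.shuffle(ids)
    cut = int(len(ids) * train_frac)
    return ids[:cut], ids[cut:]

# e.g. iLIDS-VID has 300 identities -> 150 for training, 150 for testing
train_ids, test_ids = split_persons(range(300))
```

Because the split is at the identity level, the 600 iLIDS-VID sequences (two per person) follow their person into exactly one of the two sets.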
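The training schedule quoted in the Experiment Setup row (plain SGD, learning rate 1e-3, batch size 1, Siamese input alternating between positive and negative sequence pairs) can be sketched in minimal form. This is a hedged stand-in for the authors' Torch implementation; `alternating_pairs` and `sgd_step` are hypothetical helpers, not functions from the paper or its codebase.

```python
# Hyperparameters taken directly from the quoted setup.
LEARNING_RATE = 1e-3   # SGD learning rate
NUM_EPOCHS = 500       # training epochs
# Batch size is 1: each SGD step sees exactly one sequence pair.

def alternating_pairs(positive_pairs, negative_pairs):
    """Yield one pair per step, alternating positive and negative.

    Labels: +1 for a same-identity (positive) pair, -1 for a
    different-identity (negative) pair.
    """
    for pos, neg in zip(positive_pairs, negative_pairs):
        yield pos, +1
        yield neg, -1

def sgd_step(params, grads, lr=LEARNING_RATE):
    """Vanilla stochastic-gradient-descent update on scalar params."""
    return [p - lr * g for p, g in zip(params, grads)]
```

A real run would compute `grads` from the Siamese loss on each yielded pair and repeat the schedule for all 500 epochs; only the alternation and update rule are shown here.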