Deep Neural Networks with Inexact Matching for Person Re-Identification

Authors: Arulkumar Subramaniam, Moitreya Chatterjee, Anurag Mittal

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conducted experiments on the large CUHK03 dataset [3], the mid-sized CUHK01 Dataset [23], and the small QMUL GRID dataset [27]. The datasets are divided into training and test sets for our experiments. The goal of every algorithm is to rank images in the gallery image bank of the test set by their similarity to a probe image (which is also from the test set). All our experiments are conducted in the single shot setting, i.e. there is exactly one image of every person in the gallery image bank and the results averaged over 10 test trials are reported using tables and Cumulative Matching Characteristics (CMC) Curves (see supplementary). We also conducted an ablation study, to further analyze the contribution of the individual components of our model.
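
The single-shot CMC protocol quoted above can be made concrete with a short sketch. The snippet below is an illustrative reconstruction, not the authors' code: the function name `cmc_single_shot`, the precomputed probe-to-gallery similarity matrix, and the rank cutoff of 20 are assumptions; only the single-shot setting and the averaging over 10 test trials come from the quoted text.

```python
import numpy as np

def cmc_single_shot(similarity, probe_ids, gallery_ids, max_rank=20):
    """Rank-k matching rates (CMC) in the single-shot setting,
    where each identity has exactly one image in the gallery."""
    similarity = np.asarray(similarity)      # (num_probe, num_gallery)
    probe_ids = np.asarray(probe_ids)
    gallery_ids = np.asarray(gallery_ids)
    cmc = np.zeros(max_rank)
    for p in range(similarity.shape[0]):
        order = np.argsort(-similarity[p])   # gallery ranked by decreasing similarity
        hit = np.where(gallery_ids[order] == probe_ids[p])[0][0]
        if hit < max_rank:
            cmc[hit:] += 1                   # correct match found at rank hit + 1
    return cmc / similarity.shape[0]

# Reported numbers are averaged over 10 random test trials, e.g.:
# curves = [cmc_single_shot(S, p, g) for (S, p, g) in trials]
# mean_cmc = np.stack(curves).mean(axis=0)
```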
Researcher Affiliation | Academia | Arulkumar Subramaniam, Indian Institute of Technology Madras, Chennai, India 600036, aruls@cse.iitm.ac.in; Moitreya Chatterjee, Indian Institute of Technology Madras, Chennai, India 600036, metro.smiles@gmail.com; Anurag Mittal, Indian Institute of Technology Madras, Chennai, India 600036, amittal@cse.iitm.ac.in
Pseudocode | No | The paper describes the architecture and training algorithm in text and provides diagrams, but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The implementation was done in a machine with NVIDIA Titan GPUs and the code was implemented using Torch and is available online at https://github.com/InnovArul/personreid_normxcorr
Open Datasets | Yes | We conducted experiments on the large CUHK03 dataset [3], the mid-sized CUHK01 Dataset [23], and the small QMUL GRID dataset [27].
Dataset Splits | Yes | For our experiments, we follow the protocol used by Ahmed et al. [2] and randomly pick a set of 1260 identities for training and 100 for testing. We use 100 identities from the training set for validation.
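
As a hedged illustration of this identity-level split: the counts (1260 train, 100 test, 100 of the training identities held out for validation) come from the quoted protocol, while the function name `split_identities`, the seed handling, and the example identity count of 1360 (1260 + 100) are assumptions for the sketch; the exact sampling in the paper follows Ahmed et al. [2].

```python
import random

def split_identities(all_ids, num_train=1260, num_test=100, num_val=100, seed=0):
    """Randomly partition person identities into train / val / test sets.

    Identities (not images) are split, so all images of a person fall on
    the same side of the split; validation identities are drawn from the
    training pool, matching the protocol quoted above.
    """
    rng = random.Random(seed)
    ids = list(all_ids)
    rng.shuffle(ids)
    test_ids = ids[:num_test]
    train_pool = ids[num_test:num_test + num_train]
    val_ids = train_pool[:num_val]
    train_ids = train_pool[num_val:]
    return train_ids, val_ids, test_ids

# Example with a 1360-identity list (1260 train + 100 test):
# train_ids, val_ids, test_ids = split_identities(range(1360))
```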
Hardware Specification | Yes | The implementation was done in a machine with NVIDIA Titan GPUs and the code was implemented using Torch and is available online.
Software Dependencies | No | The paper states: "the code was implemented using Torch". However, it does not provide specific version numbers for Torch or any other software dependencies, which are required for reproducibility.
Experiment Setup | Yes | For all our experiments, we use a momentum of 0.9, starting learning rate of 0.05, learning rate decay of 1e-4, weight decay of 5e-4. For our models, we use mini-batch sizes of 128 and train our models for about 200,000 iterations. During fine-tuning, we use a learning rate of 0.001.
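
A minimal sketch of these hyperparameters, re-created in modern PyTorch rather than the (Lua) Torch used by the authors: the placeholder `model`, and the reading of "learning rate decay of 1e-4" as a Torch `optim.sgd`-style schedule `lr_t = lr / (1 + t * decay)`, are assumptions; the numeric values are the ones quoted above.

```python
import torch

model = torch.nn.Linear(10, 2)  # placeholder for the network described in the paper

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.05,          # starting learning rate
    momentum=0.9,
    weight_decay=5e-4,
)
# Assumed interpretation of "learning rate decay of 1e-4":
# lr_t = lr / (1 + t * 1e-4), applied once per iteration.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda t: 1.0 / (1.0 + 1e-4 * t)
)

batch_size = 128        # mini-batch size
num_iterations = 200_000
finetune_lr = 0.001     # learning rate used during fine-tuning

# Training loop skeleton (one scheduler step per iteration):
# for t in range(num_iterations):
#     ...forward pass, loss, backward pass...
#     optimizer.step()
#     scheduler.step()
```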