Person Re-Identification by Deep Joint Learning of Multi-Loss Classification
Authors: Wei Li, Xiatian Zhu, Shaogang Gong
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive comparative evaluations demonstrate the advantages of this new JLML model for person re-id over a wide range of state-of-the-art re-id methods on four benchmarks (VIPeR, GRID, CUHK03, Market-1501). |
| Researcher Affiliation | Academia | Queen Mary University of London, London E1 4NS, UK |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link to its open-source code. |
| Open Datasets | Yes | For evaluation, we used four benchmarking re-id datasets, VIPeR [Gray and Tao, 2008], GRID [Loy et al., 2009], CUHK03 [Li et al., 2014], and Market-1501 [Zheng et al., 2015]. |
| Dataset Splits | Yes | On VIPeR, we randomly split the whole population (632 people) into two halves: one for training (316) and one for testing (316). We repeated 10 trials of random people splits and used the averaged results. On GRID, the training/test split was 125/125, with 775 distractor people included in the test gallery. We used the benchmarking 10 people splits [Loy et al., 2009] and the averaged performance. On CUHK03, following [Li et al., 2014] we repeated 20 random 1260/100 training/test splits and reported the averaged accuracies under the single-shot evaluation setting. On Market-1501, we used the standard training/test split (750/751) [Zheng et al., 2015]. A split-generation sketch is given after this table. |
| Hardware Specification | No | The paper does not specify the hardware used for experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper mentions using the "Caffe framework [Jia et al., 2014]" but does not provide specific version numbers for Caffe or any other software dependencies. |
| Experiment Setup | Yes | Table 4: JLML training parameters. BLR: base learning rate; LRP: learning rate policy; MOT: momentum; IT: iteration; BS: batch size. [...] We also adopted the stepped learning rate policy, e.g. dropping the learning rate by a factor of 10 every 100K iterations for JLML pre-training and every 20K iterations for JLML training. A learning-rate schedule sketch follows the table. |
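
The repeated random identity splits quoted in the "Dataset Splits" row can be generated in a few lines. The sketch below is not the authors' code: it assumes VIPeR identities are indexed 0 to 631, uses a fixed seed purely for illustration, and `evaluate_trial` is a hypothetical stand-in for a single train/test run.

```python
# Minimal sketch (assumptions noted above) of repeated random half/half
# identity splits for VIPeR: 316 identities for training, 316 for testing.
import numpy as np

def viper_identity_splits(num_ids=632, num_trials=10, seed=0):
    """Yield (train_ids, test_ids) for each of the random-split trials."""
    rng = np.random.RandomState(seed)
    for _ in range(num_trials):
        perm = rng.permutation(num_ids)
        yield np.sort(perm[:num_ids // 2]), np.sort(perm[num_ids // 2:])

# Averaging a single-number result (e.g. rank-1 accuracy) over the 10 trials;
# `evaluate_trial` is hypothetical and stands in for one full train/test run.
# scores = [evaluate_trial(tr, te) for tr, te in viper_identity_splits()]
# print(np.mean(scores))
```

The same pattern applies to GRID (125/125, with 775 distractors added to the test gallery) and CUHK03 (1260/100, 20 trials), with the split sizes changed accordingly.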
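The stepped learning-rate policy quoted in the "Experiment Setup" row corresponds to Caffe's standard "step" schedule, lr = base_lr * gamma^(floor(iter / stepsize)). The sketch below assumes gamma = 0.1 (a 10x drop) and the 100K/20K step sizes from the quote; the base learning rates are listed in the paper's Table 4 and are not reproduced here, so `base_lr` is left as a caller-supplied placeholder.

```python
# Minimal sketch of a stepped learning-rate schedule as described above:
# the rate drops by a constant factor every `stepsize` iterations.
def stepped_lr(iteration, base_lr, stepsize, gamma=0.1):
    """Learning rate at `iteration` under a Caffe-style "step" policy."""
    return base_lr * (gamma ** (iteration // stepsize))

# JLML pre-training: drop by 10x every 100K iterations.
#   lr = stepped_lr(iteration, base_lr=<Table 4 value>, stepsize=100_000)
# JLML training: drop by 10x every 20K iterations.
#   lr = stepped_lr(iteration, base_lr=<Table 4 value>, stepsize=20_000)
```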