Learning Incremental Triplet Margin for Person Re-Identification
Authors: Yingying Zhang, Qiaoyong Zhong, Liang Ma, Di Xie, Shiliang Pu (pp. 9243-9250)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on Market-1501, CUHK03, and DukeMTMC-reID show that our approach yields a performance boost and outperforms most existing state-of-the-art methods. |
| Researcher Affiliation | Industry | Yingying Zhang, Qiaoyong Zhong, Liang Ma, Di Xie, Shiliang Pu — Hikvision Research Institute, {zhangyingying7,zhongqiaoyong,maliang6,xiedi,pushiliang}@hikvision.com |
| Pseudocode | Yes | Algorithm 1 Global Hard Identity Searching. |
| Open Source Code | No | The paper does not provide a specific link or explicit statement about releasing the source code for the described methodology. |
| Open Datasets | Yes | We evaluate the proposed approach on three large-scale person ReID datasets, namely Market-1501 (Zheng et al. 2015), CUHK03 (Li et al. 2014) and DukeMTMC-reID (Ristani et al. 2016; Zheng, Zheng, and Yang 2017). |
| Dataset Splits | No | The paper specifies training and testing splits for datasets (e.g., '12,936 images from 751 identities (including 1 background category) for training and 19,732 images from 750 identities for testing' for Market-1501), but does not explicitly mention or detail a separate validation set split. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'Adam optimizer' but does not provide specific version numbers for these or any other software dependencies, which is required for reproducibility. |
| Experiment Setup | Yes | The training images are randomly cropped with a ratio uniformly sampled from [0.8, 1) and resized to 288 × 144. Random erasing (Zhong et al. 2017b) and random flipping are applied on resized images with a probability of 0.5. The hyper-parameters of random erasing data augmentation are set the same as (Zhong et al. 2017b). The number of persons P per-batch and number of images per-person K are set to 20 and 4 respectively. Hence, the mini-batch size is 80. For LITM, the base and incremental margins are set as m0 = 4, m1 = 7, m2 = 10. We use the Adam optimizer (Kingma and Ba 2014) with ϵ = 10⁻³, β1 = 0.99 and β2 = 0.999. The network is trained for 300 epochs in total. And a piecewise learning rate schedule is utilized, where it is fixed to 2 × 10⁻⁴ in the first 150 epochs and decayed exponentially in the rest 150 epochs. |
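The piecewise learning-rate schedule quoted above can be sketched as follows. This is a minimal illustration, not the authors' code: the paper only says the rate is fixed at 2 × 10⁻⁴ for 150 epochs and then "decayed exponentially", so the final learning rate (`final_lr`) and the exact decay curve here are assumptions.

```python
def lr_at_epoch(epoch, base_lr=2e-4, total_epochs=300,
                fixed_epochs=150, final_lr=2e-6):
    """Piecewise schedule described in the paper: constant for the
    first `fixed_epochs`, then exponential decay to `final_lr`
    (an assumed target value) over the remaining epochs."""
    if epoch < fixed_epochs:
        return base_lr
    # Interpolate exponentially: t goes from 0 at the first decayed
    # epoch to 1 at the last epoch of training.
    decay_epochs = total_epochs - fixed_epochs
    t = (epoch - fixed_epochs) / (decay_epochs - 1)
    return base_lr * (final_lr / base_lr) ** t


# Example: the rate stays at 2e-4 through epoch 149, then decays.
for e in (0, 149, 150, 299):
    print(e, lr_at_epoch(e))
```

In a PyTorch training loop this would typically be applied by updating each parameter group's `lr` at the start of every epoch (or via a `LambdaLR` scheduler wrapping an equivalent function).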