Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification
Authors: Kecheng Zheng, Cuiling Lan, Wenjun Zeng, Zhizheng Zhang, Zheng-Jun Zha
AAAI 2021, pp. 3538-3546
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our methods using three person Re-ID datasets, including DukeMTMC-reID (Duke) (Ristani et al. 2016), Market-1501 (Market) (Zheng et al. 2015) and MSMT17 (Wei et al. 2018). We adopt mean average precision (mAP) and CMC Rank-1/5/10 (R1/R5/R10) accuracy for evaluation. Table 1: Performance (%) comparison with the state-of-the-art methods for UDA person Re-ID on the datasets of DukeMTMC-reID, Market-1501 and MSMT17. We mark the results of the second best by underline and the best results by bold text. |
| Researcher Affiliation | Collaboration | Kecheng Zheng (1*), Cuiling Lan (2), Wenjun Zeng (2), Zhizheng Zhang (1), Zheng-Jun Zha (1); 1: University of Science and Technology of China; 2: Microsoft Research Asia. Emails: {zkcys001,zhizheng}@mail.ustc.edu.cn, {culan, wezeng}@microsoft.com, zhazj@ustc.edu.cn |
| Pseudocode | No | The paper does not contain any pseudocode or a clearly labeled algorithm block. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We evaluate our methods using three person Re-ID datasets, including DukeMTMC-reID (Duke) (Ristani et al. 2016), Market-1501 (Market) (Zheng et al. 2015) and MSMT17 (Wei et al. 2018). |
| Dataset Splits | No | The paper describes training and testing splits for datasets but does not explicitly mention a separate validation set split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'ResNet50' and the 'Adam optimizer' but does not specify version numbers for any software dependencies or frameworks. |
| Experiment Setup | Yes | We use ResNet50 pretrained on ImageNet as our backbone network... For source pre-training, each mini-batch contains 64 images of 4 identities. For our fine-tuning stage, when the source data is also used, each mini-batch contains 64 source-domain images of 4 identities and 64 target-domain images of 4 pseudo identities, where there are 16 images for each identity. All images are resized to 256×128... we use the clustering algorithm of DBSCAN. For DBSCAN, the maximum distance between neighbors is set to eps = 0.6 and the minimal number of neighbors for a dense point is set to 4. The Adam optimizer is adopted. The initial learning rate is set to 0.00035. We set the weighting factors λtri = 1, λct = 0.05, and λreg = 1, where we determine them simply by making the several losses on the same order of magnitude. |
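
As a reading aid, below is a minimal, hypothetical sketch (not the authors' released code) of the fine-tuning configuration quoted in the Experiment Setup row: DBSCAN pseudo-labeling on target-domain features (eps = 0.6, at least 4 neighbors per dense point) and an Adam optimizer with initial learning rate 0.00035 and the reported loss weights. The helper names `extract_features`, `assign_pseudo_labels`, `triplet_loss`, `contrastive_loss`, and `uncertainty_reg`, as well as the plain Euclidean distance used for clustering, are assumptions made for illustration only.

```python
import torch
import torchvision
from sklearn.cluster import DBSCAN

device = "cuda" if torch.cuda.is_available() else "cpu"

# ResNet50 backbone pretrained on ImageNet, with the classifier head removed
# (assumption: the pooled 2048-d features serve as the Re-ID embedding).
backbone = torchvision.models.resnet50(pretrained=True)
backbone.fc = torch.nn.Identity()
backbone = backbone.to(device)


@torch.no_grad()
def extract_features(loader):
    """Pass all target-domain images through the backbone and L2-normalize."""
    backbone.eval()
    feats = []
    for images, _ in loader:  # images assumed already resized to 256x128
        f = backbone(images.to(device))
        feats.append(torch.nn.functional.normalize(f, dim=1).cpu())
    return torch.cat(feats)


def assign_pseudo_labels(features):
    """Cluster target-domain features with DBSCAN (eps=0.6, min_samples=4).

    Plain pairwise Euclidean distance is a simplification for this sketch;
    the label -1 marks outliers that would be excluded from fine-tuning.
    """
    dist = torch.cdist(features, features).numpy()
    return DBSCAN(eps=0.6, min_samples=4, metric="precomputed").fit_predict(dist)


# Adam optimizer with the reported initial learning rate of 0.00035.
optimizer = torch.optim.Adam(backbone.parameters(), lr=3.5e-4)

# Reported loss weights: lambda_tri = 1, lambda_ct = 0.05, lambda_reg = 1, so a
# hypothetical total objective would be combined as:
# total_loss = 1.0 * triplet_loss + 0.05 * contrastive_loss + 1.0 * uncertainty_reg
```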