Domain Adaptive Attention Learning for Unsupervised Person Re-Identification
Authors: Yangru Huang, Peixi Peng, Yi Jin, Yidong Li, Junliang Xing | pp. 11069–11076
AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the Market-1501, DukeMTMC-reID and MSMT17 benchmarks demonstrate the proposed approach outperforms the state-of-the-art. |
| Researcher Affiliation | Academia | (1) School of Computer and Information Technology, Beijing Jiaotong University, China; (2) Institute of Automation, Chinese Academy of Sciences, China; (3) Institute of Information Engineering, Chinese Academy of Sciences, China |
| Pseudocode | Yes | Alg. 1 summarizes the proposed learning method. Algorithm 1: The proposed learning algorithm. |
| Open Source Code | No | The paper does not provide any concrete access to source code, such as a specific repository link or an explicit statement of code release in supplementary materials. |
| Open Datasets | Yes | Experiment Datasets: Market-1501 (Zheng et al. 2015) contains 32,668 images of 1,501 identities captured by 6 camera views. DukeMTMC-reID (Ristani et al. 2016) consists of 36,411 images of 1,812 persons from 8 high-resolution cameras. MSMT17 (Wei et al. 2018) is a larger and more challenging dataset collected with 12 outdoor cameras and 3 indoor cameras during 4 days. |
| Dataset Splits | No | The paper describes specific training and testing set sizes for each dataset (e.g., 'the whole dataset is divided into a training set containing 12,936 images of 751 identities and a testing set containing 19,732 images of 750 identities' for Market-1501), but it does not specify a separate validation split. |
| Hardware Specification | No | The paper mentions that 'The code is implemented on Pytorch' but does not provide specific hardware details such as GPU models, CPU types, or memory amounts used for running experiments. |
| Software Dependencies | No | The paper states 'The code is implemented on Pytorch' but does not provide specific version numbers for PyTorch or any other software dependencies needed to replicate the experiment. |
| Experiment Setup | Yes | All results are achieved under the single-query mode without Re-Ranking (Zhong et al. 2017a) refinement for fair comparison. The parameters of ResNet-50 (He et al. 2016) are pre-trained on ImageNet, and all other network parameters are initialized randomly. The code is implemented in PyTorch and all images are resized to 384×128. Similar to (Zhong et al. 2019; Fu et al. ), random flipping, random cropping and random erasing (Zhong et al. 2017b) are performed for data augmentation during training. Stochastic gradient descent with a momentum of 0.9 is adopted. At each iteration of Alg. 1, the learning rate is set to 1.5×10⁻⁴ for the ResNet-50 base layers and 3×10⁻⁵ for the other layers in the first 20 epochs. The learning rate drops by a factor of 0.1 every 60 epochs. The training at each iteration lasts for 260 epochs and the minibatch is composed of 32 images. During testing, the domain-shared features are used for matching. |
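The reported learning-rate schedule (per-group base rates with a ×0.1 step decay every 60 epochs) can be sketched as a plain step-decay function. This is a minimal illustration of the schedule as described in the setup row, not the authors' code; the function name and defaults are assumptions.

```python
def lr_at_epoch(epoch, base_lr, drop_factor=0.1, drop_every=60):
    """Step-decay schedule: multiply base_lr by drop_factor
    once per completed drop_every-epoch interval.

    base_lr would be 1.5e-4 for the ResNet-50 base layers and
    3e-5 for the remaining layers, per the paper's setup.
    """
    return base_lr * (drop_factor ** (epoch // drop_every))


# Example values over the 260-epoch training run described in the paper:
# epoch 0-59  -> base_lr, epoch 60-119 -> 0.1 * base_lr, and so on.
base_lr_backbone = 1.5e-4
base_lr_other = 3e-5
print(lr_at_epoch(0, base_lr_backbone))    # initial rate for base layers
print(lr_at_epoch(60, base_lr_backbone))   # after the first drop
print(lr_at_epoch(120, base_lr_other))     # other layers after two drops
```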