SECRET: Self-Consistent Pseudo Label Refinement for Unsupervised Domain Adaptive Person Re-identification
Authors: Tao He, Leqi Shen, Yuchen Guo, Guiguang Ding, Zhenhua Guo
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on benchmark datasets show the superiority of our method. Specifically, our method outperforms the state-of-the-art by 6.3% in terms of mAP on the challenging dataset MSMT17. In the purely unsupervised setting, our method also surpasses existing works by a large margin. |
| Researcher Affiliation | Collaboration | Tao He*1,2, Leqi Shen*1,2, Yuchen Guo 2, Guiguang Ding 1,2, Zhenhua Guo3 1 School of Software, Tsinghua University, Beijing, China 2 Beijing National Research Center for Information Science and Technology (BNRist) 3 Alibaba Group |
| Pseudocode | Yes | Algorithm 1: Mutual refinement of pseudo labels Algorithm 2: Noisy instance elimination |
| Open Source Code | Yes | Code is available at https://github.com/LunarShen/SECRET. |
| Open Datasets | Yes | The proposed SECRET is evaluated on the popular benchmark datasets: Market-1501 (Zheng et al. 2015), DukeMTMC-reID (Ristani et al. 2016) and MSMT17 (Wei et al. 2018). |
| Dataset Splits | No | The paper discusses training on source domains and fine-tuning on target domains for unsupervised domain adaptation. It mentions evaluation on benchmark datasets (Market-1501, DukeMTMC-reID, MSMT17). However, it does not explicitly describe a separate validation split, with specific percentages or counts, used during training for hyperparameter tuning or early stopping, distinct from the final test/evaluation set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions using "ResNet-50 as our backbone," which is a model architecture, but it does not specify any software dependencies (e.g., programming languages, deep learning frameworks like PyTorch or TensorFlow, or other libraries) along with their version numbers. |
| Experiment Setup | Yes | The input images are resized to 256 × 128. Random flip, padding, and random crop are used as data augmentation in both source domain pre-training and target domain fine-tuning. Random erase (Zhong et al. 2020a) is only used in target domain fine-tuning. We randomly sample 4 instances per ground truth (in pre-training) or pseudo label (in fine-tuning) in a mini-batch, resulting in batch size 64. In pre-training, the initial learning rate is set to 3.5 × 10⁻⁴ and decays by 0.1 at epochs 40 and 70, with 80 epochs in total. In fine-tuning, clustering-and-pseudo-label fine-tuning runs 80 epochs in total. The learning rate is set to 3.5 × 10⁻⁴. The hyper-parameter K in filtering pseudo labels of global and local features is set to 40%. |
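The reported setup implies two concrete mechanics: a step-decay learning-rate schedule (3.5 × 10⁻⁴, ×0.1 at epochs 40 and 70) and PK-style batch sampling (16 identities × 4 instances = batch size 64). The paper does not specify a framework, so the following is a minimal framework-free Python sketch; the function names `lr_at_epoch` and `pk_batches` are illustrative, not from the paper.

```python
import random
from collections import defaultdict


def lr_at_epoch(epoch, base_lr=3.5e-4, milestones=(40, 70), gamma=0.1):
    """Step-decay schedule as reported: decay by 0.1 at epochs 40 and 70."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr


def pk_batches(labels, k=4, p=16, seed=0):
    """PK sampling: p identities x k instances each -> batch size p*k (= 64).

    `labels` maps instance index -> ground-truth (pre-training) or
    pseudo (fine-tuning) identity label.
    """
    rng = random.Random(seed)
    by_id = defaultdict(list)
    for idx, pid in labels.items():
        by_id[pid].append(idx)
    # Keep only identities with at least k instances to sample from.
    ids = [pid for pid, idxs in by_id.items() if len(idxs) >= k]
    rng.shuffle(ids)
    batches = []
    for i in range(0, len(ids) - p + 1, p):
        batch = []
        for pid in ids[i:i + p]:
            batch.extend(rng.sample(by_id[pid], k))
        batches.append(batch)
    return batches
```

For example, 128 instances over 32 identities (4 each) yield two batches of 64; the learning rate falls to 3.5 × 10⁻⁵ after epoch 40 and 3.5 × 10⁻⁶ after epoch 70.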