Asymmetric Co-Teaching for Unsupervised Cross-Domain Person Re-Identification
Authors: Fengxiang Yang, Ke Li, Zhun Zhong, Zhiming Luo, Xing Sun, Hao Cheng, Xiaowei Guo, Feiyue Huang, Rongrong Ji, Shaozi Li
AAAI 2020, pp. 12597-12604
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that the proposed framework can consistently benefit most clustering based methods, and boost the state-of-the-art adaptation accuracy. |
| Researcher Affiliation | Collaboration | (1) Artificial Intelligence Department, Xiamen University, China; (2) Post Doctoral Mobile Station of Information and Communication Engineering, Xiamen University, China; (3) Tencent Youtu Lab, Shanghai, China |
| Pseudocode | Yes | Algorithm 1: Procedure of the proposed method. Inputs: labeled source dataset S, unlabeled target dataset T, ImageNet pre-trained model M, training epochs e1, e2, e3, maximum rounds r2, r3. Outputs: adapted model M_ada. (A hypothetical Python skeleton of this procedure is sketched after the table.) |
| Open Source Code | Yes | Our code is available at https://github.com/FlyingRoastDuck/ACT_AAAI20. |
| Open Datasets | Yes | We conduct experiments on three large-scale benchmark datasets: Market-1501 (Zheng et al. 2015), DukeMTMC-reID (Ristani et al. 2016; Zheng, Zheng, and Yang 2017), and CUHK03 (Li et al. 2014). |
| Dataset Splits | No | We conduct experiments on three large-scale benchmark datasets: Market-1501 (Zheng et al. 2015), DukeMTMC-reID (Ristani et al. 2016; Zheng, Zheng, and Yang 2017), and CUHK03 (Li et al. 2014). The mAP and rank-1 accuracy are adopted as evaluation metrics. We use the new protocol proposed in (Zhong et al. 2017) for evaluating CUHK03. |
| Hardware Specification | No | The paper does not mention the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions an "ImageNet pre-trained ResNet-50 model" but does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | Adam solver is used to optimize the re-ID model with an initial learning rate of 3×10⁻⁴. We train the re-ID model for 150 epochs, and the learning rate is linearly decreased to 0 for the last 50 epochs. Margin m in the triplet loss is set to 0.3. Training batch size Bs = 64. Input images are resized to 128×64. We also use random flip and random erasing (Zhong et al. 2020) for data augmentation. In the clustering-based adaptation stage, we constrain the minimum size of a cluster to 4 and set the density radius p = 1.6×10⁻³. After a clustering step, we train the model for 30 epochs, and iterate this procedure for 30 rounds. In the last asymmetric co-teaching stage, we form triplet samples in a batch to compute the triplet loss for each anchor image. Anchors with the smallest K% of losses are selected for further training. We set the small-loss ratio K = 20% and linearly increase it to 100% over the R_co = 10 co-teaching epochs. Adam is used to fine-tune the models for 10 epochs with a fixed learning rate of 6×10⁻⁵. (Hedged PyTorch sketches of the schedule and the small-loss selection follow the table.) |
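To make the flow of Algorithm 1 concrete, here is a minimal Python skeleton of the three stages it implies: supervised pre-training on the source set, clustering-based adaptation, and asymmetric co-teaching. Every helper name below (`pretrain_on_source`, `cluster_target`, `finetune_on_clusters`, `co_teach`) is a hypothetical placeholder, not the authors' code; only the inputs (S, T, M, e1, e2, e3, r2, r3) and the loop structure come from the algorithm as quoted.

```python
import copy

# Hypothetical stubs standing in for the training routines the paper
# describes; each returns its (notionally updated) model.

def pretrain_on_source(model, source_loader, epochs):
    return model  # supervised training on the labeled source dataset S

def cluster_target(model, target_loader):
    return []  # pseudo-labels from clustering features of the target dataset T

def finetune_on_clusters(model, target_loader, pseudo_labels, epochs):
    return model  # fine-tuning the model with the current pseudo-labels

def co_teach(model_a, model_b, target_loader, epochs):
    return model_a, model_b  # mutual training on each other's selected samples

def adapt(source_loader, target_loader, model, e1, e2, e3, r2, r3):
    # Stage 1: pre-train on the labeled source dataset S for e1 epochs.
    model = pretrain_on_source(model, source_loader, epochs=e1)

    # Stage 2: clustering-based adaptation on the unlabeled target dataset T,
    # alternating clustering and fine-tuning for r2 rounds.
    for _ in range(r2):
        pseudo_labels = cluster_target(model, target_loader)
        model = finetune_on_clusters(model, target_loader, pseudo_labels, epochs=e2)

    # Stage 3: asymmetric co-teaching between two peer models for r3 rounds.
    model_a, model_b = model, copy.deepcopy(model)
    for _ in range(r3):
        model_a, model_b = co_teach(model_a, model_b, target_loader, epochs=e3)

    return model_a  # the adapted model M_ada
```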
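The experiment setup row also translates fairly directly into PyTorch. Below is a hedged sketch of (a) the 150-epoch Adam schedule with the learning rate held at 3×10⁻⁴ and decayed linearly to 0 over the last 50 epochs, and (b) the small-loss anchor selection with the keep ratio K ramped linearly from 20% to 100% over R_co = 10 epochs. The exact decay breakpoint and ramp endpoints are assumptions (the paper only says "linearly"), and the model here is a stand-in, not the actual ResNet-50 re-ID network.

```python
import torch

# (a) Adam at 3e-4; constant for the first 100 epochs, then linearly
# decayed to 0 over the last 50 of the 150 total epochs (assumed breakpoint).
model = torch.nn.Linear(2048, 751)  # stand-in for the ResNet-50 re-ID model
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda epoch: 1.0 if epoch < 100 else (150 - epoch) / 50.0
)

# (b) Small-loss selection: keep the anchors whose triplet losses fall in
# the smallest K% of the batch.
def select_small_loss(losses: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    num_keep = max(1, int(keep_ratio * losses.numel()))
    return torch.topk(losses, num_keep, largest=False).indices

def keep_ratio_at(epoch: int, total: int = 10, start: float = 0.20) -> float:
    # Linearly ramp K from 20% toward 100% over the R_co = 10 epochs.
    return min(1.0, start + (1.0 - start) * epoch / max(total - 1, 1))

# Example: a batch of Bs = 64 per-anchor losses at the first co-teaching epoch.
losses = torch.rand(64)                             # stand-in loss values
kept = select_small_loss(losses, keep_ratio_at(0))  # ~20% smallest-loss anchors
```

In co-teaching schemes, each network typically passes its selected small-loss anchors to its peer, which updates on them; the batch-level selection above is the primitive that makes that exchange possible.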