Graph Consistency Based Mean-Teaching for Unsupervised Domain Adaptive Person Re-Identification

Authors: Xiaobin Liu, Shiliang Zhang

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on three datasets, i.e., Market-1501, DukeMTMC-reID, and MSMT17, show that the proposed GCMT outperforms state-of-the-art methods by a clear margin.
Researcher Affiliation | Academia | Xiaobin Liu, Shiliang Zhang; Department of Computer Science, School of EECS, Peking University; {xbliu.vmc, slzhang.jdl}@pku.edu.cn
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/liu-xb/GCMT.
Open Datasets | Yes | Experiments are performed on three datasets, i.e., DukeMTMC-reID [Zheng et al., 2017], Market-1501 [Zheng et al., 2015], and MSMT17 [Wei et al., 2018].
Dataset Splits | Yes | DukeMTMC-reID contains 36,411 images of 1,812 identities; 16,522 images of 702 identities are used for training, 2,228 images serve as queries, and 17,661 images form the gallery. Market-1501 contains 32,668 images of 1,501 identities; 12,936 images of 751 identities are used for training, 3,368 images serve as queries, and 19,732 images form the gallery. MSMT17 contains 126,441 images of 4,101 identities; 32,621 images of 1,041 identities are used for training, 11,659 images serve as queries, and 82,161 images form the gallery. (These counts are summarized in the first sketch after the table.)
Hardware Specification | Yes | Models are trained on a server with three GeForce RTX 2080 Ti GPUs and one Tesla V100 GPU.
Software Dependencies | No | The paper mentions 'Adam optimizer' and 'K-Means method' but does not specify any software or library names with version numbers.
Experiment Setup | Yes | Input images are resized to 256×128. We use random flipping, random cropping, and random erasing [Zhong et al., 2020] for data augmentation. The K-Means method is used for unsupervised clustering; the number of clusters is set to 500 on Duke and Market and to 1,500 on MSMT following [Ge et al., 2020a; Zhai et al., 2020]. In each training batch, 16 identities are randomly selected and 4 images are taken per identity, resulting in 64 images. K is set to 12 in teacher graph construction. The loss weight λ_GCC is set to 0.6, and β is set to 0.05 following [Liu and Zhang, 2020]. The Adam optimizer is used for training; the learning rate is initialized to 0.00035 and decayed by a factor of 0.1 after 20 epochs. Models are trained for 120 epochs in total, with 400 iterations per epoch. (A hedged configuration sketch follows the table.)
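
The dataset splits quoted above condense into a small lookup table. Below is a minimal Python sketch for quick reference; the `SPLITS` name and dict layout are illustrative, and the counts are exactly those in the Dataset Splits row:

```python
# Reported split sizes (image counts) for the three benchmarks.
# "train_ids" is the number of training identities.
SPLITS = {
    "DukeMTMC-reID": {"total": 36_411, "train": 16_522, "query": 2_228,
                      "gallery": 17_661, "train_ids": 702},
    "Market-1501":   {"total": 32_668, "train": 12_936, "query": 3_368,
                      "gallery": 19_732, "train_ids": 751},
    "MSMT17":        {"total": 126_441, "train": 32_621, "query": 11_659,
                      "gallery": 82_161, "train_ids": 1_041},
}

for name, s in SPLITS.items():
    print(f"{name}: {s['train']} train / {s['query']} query / "
          f"{s['gallery']} gallery ({s['train_ids']} training identities)")
```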
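
The Experiment Setup row likewise maps onto a short training configuration. The following PyTorch-style sketch is a hedged reading of that description, not the authors' released code: the placeholder backbone, the pad-then-crop step, and the torchvision/scikit-learn calls are assumptions, while the numeric values (256×128 input, 16×4 batches, cluster counts, K = 12, λ_GCC = 0.6, β = 0.05, lr 0.00035 decayed by 0.1 after epoch 20, 120 epochs × 400 iterations) come from the row above.

```python
import numpy as np
from torch import nn, optim
from torchvision import transforms
from sklearn.cluster import KMeans

# Augmentation as described: resize to 256x128, random flipping, random
# cropping, and random erasing [Zhong et al., 2020]. The pad-then-crop
# step is a common re-ID convention, not stated in the paper.
train_transform = transforms.Compose([
    transforms.Resize((256, 128)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop((256, 128), padding=10),
    transforms.ToTensor(),
    transforms.RandomErasing(),
])

# Placeholder backbone; the paper trains a full re-ID network, not a
# single linear layer.
model = nn.Linear(2048, 500)

# K-Means pseudo-labels: 500 clusters on Duke/Market, 1,500 on MSMT17.
features = np.random.randn(1000, 2048)  # stand-in for extracted features
pseudo_labels = KMeans(n_clusters=500, n_init=10).fit_predict(features)

# GCMT-specific hyper-parameters quoted above (used inside the loss,
# which is omitted from this sketch).
K_NEIGHBORS, LAMBDA_GCC, BETA = 12, 0.6, 0.05

# Adam with initial lr 0.00035, decayed by 0.1 after 20 epochs.
optimizer = optim.Adam(model.parameters(), lr=0.00035)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[20],
                                           gamma=0.1)

EPOCHS, ITERS_PER_EPOCH = 120, 400
for epoch in range(EPOCHS):
    for it in range(ITERS_PER_EPOCH):
        # One batch: 16 identities x 4 images each = 64 images.
        pass
    scheduler.step()
```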