Unsupervised Domain Adaptation for Person Re-identification via Heterogeneous Graph Alignment

Authors: Minying Zhang, Kai Liu, Yidong Li, Shihui Guo, Hongtao Duan, Yimin Long, Yi Jin (pp. 3360-3368)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper states: "Extensive experiments on benchmark datasets demonstrate that the proposed approach outperforms the state-of-the-arts." The Experiments section adds: "In this section, we conduct sufficient ablation studies to prove the effectiveness of each component in HGA. Then, we compare the performance of proposed HGA with other state-of-the-art unsupervised domain adaptation person re-ID methods to show superiority."
Researcher Affiliation | Collaboration | Minying Zhang,1 Kai Liu,1,2 Yidong Li,2 Shihui Guo,3 Hongtao Duan,1 Yimin Long,1 Yi Jin2 — 1 Alibaba Group; 2 Beijing Jiaotong University; 3 Xiamen University
Pseudocode | Yes | Algorithm 1: The proposed HGA framework.
Open Source Code | No | No explicit statement about providing open-source code or a link to a repository was found.
Open Datasets | Yes | "We evaluate our method on three person re-ID benchmark datasets, i.e. Market1501 (Zheng et al. 2015), DukeMTMC-ReID (Ristani et al. 2016; Zheng, Zheng, and Yang 2017) and MSMT17 (Wei et al. 2018), which are considered as large scale in the community."
Dataset Splits | No | The paper refers only to training and testing; it does not explicitly describe a validation split or its use for hyperparameter tuning.
Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for running the experiments. It only mentions the backbone network used.
Software Dependencies | No | The paper states "All codes are implemented on Pytorch." but does not specify a version number for PyTorch or any other software dependency.
Experiment Setup | Yes | "The learning rate is initialized to 0.01 and divided by 10 for every 40 epochs. We set the batch size equal to 128 for both training and testing. ... the training process lasts for 100 epochs, including 20 epochs for coarse-grained alignment and another 80 epochs for fine-grained alignment." The paper also specifies the number of appearance groups K and the minimal samples Smin for HDBSCAN, and discusses their values in a parameter analysis.
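For reference, the reported step-decay schedule (initial learning rate 0.01, divided by 10 every 40 epochs over a 100-epoch run) can be sketched as a closed-form function. This is a minimal illustration of the stated schedule, not the authors' code; in practice one would use PyTorch's `torch.optim.lr_scheduler.StepLR` with `step_size=40` and `gamma=0.1`.

```python
def learning_rate(epoch, base_lr=0.01, step=40, gamma=0.1):
    """Step-decay schedule: multiply the base LR by `gamma`
    once every `step` epochs (i.e. divide by 10 every 40 epochs
    under the paper's reported settings)."""
    return base_lr * gamma ** (epoch // step)

# Schedule over the reported 100-epoch run:
#   epochs  0-39  -> 0.01
#   epochs 40-79  -> 0.001
#   epochs 80-99  -> 0.0001
schedule = [learning_rate(e) for e in range(100)]
```

Note that the 20/80 split between coarse-grained and fine-grained alignment is a separate training-phase boundary and does not coincide with the 40-epoch LR steps.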