Multi-Centroid Representation Network for Domain Adaptive Person Re-ID

Authors: Yuhang Wu, Tengteng Huang, Haotian Yao, Chi Zhang, Yuanjie Shao, Chuchu Han, Changxin Gao, Nong Sang

AAAI 2022, pp. 2750-2758

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the superiority of MCRN over state-of-the-art approaches on multiple UDA re-ID tasks and fully unsupervised re-ID tasks. Sections such as "Experiments", "Ablation Studies", and "Comparison with the State-of-the-Arts" include performance tables (Tables 1, 5, and 6) showing empirical results.
Researcher Affiliation | Collaboration | (1) Key Laboratory of Ministry of Education for Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology; (2) Megvii Technology
Pseudocode | No | The paper describes mechanisms and mathematical formulations, but it does not include a dedicated "Pseudocode" or "Algorithm" block, figure, or section with structured, code-like steps.
Open Source Code | No | The paper does not include an unambiguous statement about releasing its source code for the methodology described, nor does it provide a direct link to a code repository.
Open Datasets | Yes | We evaluate our method on three person re-ID datasets, including Market-1501 (Zheng et al. 2015), DukeMTMC-reID (Ristani et al. 2016) and MSMT17 (Wei et al. 2018).
Dataset Splits | No | The paper evaluates on source and target domains for UDA re-ID but gives no explicit numerical training/validation/test splits (percentages or counts) that would allow the data partitioning to be reproduced; it describes mini-batch sampling but not overall dataset splits.
Hardware Specification | Yes | We implement our approach using the Pytorch (Paszke et al. 2019) framework and use four NVIDIA RTX-2080TI GPUs for training.
Software Dependencies | No | The paper mentions the "Pytorch (Paszke et al. 2019) framework" but does not give version numbers for PyTorch or for any other software dependency such as CUDA, Python, or supporting libraries.
Experiment Setup | Yes | Each mini-batch consists of 64 source domain images and 64 target domain images, with 4 images per ground-truth/pseudo class (i.e., K is set to 4). All training images are resized to 256 × 128 and various data augmentations are applied, including random cropping, random flipping and random erasing (Zhong et al. 2020). Adam optimizer is utilized to optimize the encoder with a weight decay of 0.0005. The initial learning rate is set to 0.00035 and is decayed by 1/10 every 20 epochs in the total 50 epochs. The momentum coefficient m in Equation 2 is set to 0.2, and the temperature coefficient τ in the contrastive losses is set to 0.05. α in SONI is set to 0.03.
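
To make the quoted training configuration concrete, the following is a minimal PyTorch sketch of the reported input size, augmentations, optimizer, and learning-rate schedule. The `encoder` stub and the padding step before random cropping are illustrative assumptions, not details taken from the paper, whose code is not released.

```python
# Hedged sketch of the reported setup: 256 x 128 inputs, random crop/flip/erasing,
# Adam (lr 3.5e-4, weight decay 5e-4), lr divided by 10 every 20 of 50 epochs.
import torch
import torchvision.transforms as T

train_transform = T.Compose([
    T.Resize((256, 128)),
    T.Pad(10),                      # padding before cropping is a common re-ID
    T.RandomCrop((256, 128)),       # convention; the paper only says "random cropping"
    T.RandomHorizontalFlip(p=0.5),
    T.ToTensor(),
    T.RandomErasing(p=0.5),         # random erasing (Zhong et al. 2020)
])

# Hypothetical stand-in for the paper's encoder backbone.
encoder = torch.nn.Sequential(
    torch.nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
)

optimizer = torch.optim.Adam(encoder.parameters(), lr=3.5e-4, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)

for epoch in range(50):
    # Each mini-batch would hold 64 source + 64 target images with K = 4 images
    # per (pseudo-)identity; the identity-balanced sampler is omitted here.
    ...
    scheduler.step()
```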
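
The same row reports a momentum coefficient m = 0.2 (Equation 2 of the paper) and a temperature τ = 0.05 in the contrastive losses. The sketch below shows how such coefficients are typically used in a centroid-memory update and a temperature-scaled contrastive loss; the exact form of the paper's Equation 2 and of its multi-centroid losses may differ, so `update_centroid` and `centroid_contrastive_loss` are hypothetical helpers, not the authors' implementation.

```python
# Hedged sketch of a momentum centroid update and a temperature-scaled
# contrastive loss, using the reported m = 0.2 and tau = 0.05.
import torch
import torch.nn.functional as F

def update_centroid(centroid, feature, m=0.2):
    # Assumed form: c <- m * c + (1 - m) * f, followed by L2 normalization.
    centroid = m * centroid + (1.0 - m) * feature
    return F.normalize(centroid, dim=-1)

def centroid_contrastive_loss(features, labels, centroids, tau=0.05):
    # features:  (B, D) L2-normalized embeddings
    # labels:    (B,)   class / pseudo-label indices into `centroids`
    # centroids: (C, D) L2-normalized centroid memory
    logits = features @ centroids.t() / tau   # (B, C) similarities scaled by tau
    return F.cross_entropy(logits, labels)

# Toy usage with random data.
B, C, D = 8, 16, 128
feats = F.normalize(torch.randn(B, D), dim=-1)
bank = F.normalize(torch.randn(C, D), dim=-1)
labels = torch.randint(0, C, (B,))
loss = centroid_contrastive_loss(feats, labels, bank, tau=0.05)
```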