Semantics-Aligned Representation Learning for Person Re-Identification

Authors: Xin Jin, Cuiling Lan, Wenjun Zeng, Guoqiang Wei, Zhibo Chen (pp. 11173-11180)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Ablation studies demonstrate the effectiveness of our design. We achieve the state-of-the-art performances on the benchmark datasets CUHK03, Market1501, MSMT17, and the partial person re-ID dataset Partial REID.
Researcher Affiliation | Collaboration | Xin Jin (1), Cuiling Lan (2), Wenjun Zeng (2), Guoqiang Wei (1), Zhibo Chen (1); (1) University of Science and Technology of China, (2) Microsoft Research Asia; {jinxustc, wgq7441}@mail.ustc.edu.cn, {culan, wezeng}@microsoft.com, chenzhibo@ustc.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or link indicating that source code for the described methodology has been released.
Open Datasets | Yes | We conduct experiments on six benchmark person re-ID datasets, including CUHK03 (Li et al. 2014), Market1501 (Zheng, Shen, and others 2015), DukeMTMC-reID (Zheng, Zheng, and Yang 2017), the large-scale MSMT17 (Wei, Zhang, and others 2018), and two challenging partial person re-ID datasets of Partial REID (Zheng et al. 2015) and Partial-iLIDS (He et al. 2018).
Dataset Splits | No | The paper mentions using benchmark datasets and following "common practices" but does not explicitly specify train/validation/test splits (percentages, sample counts, or references to how the splits were configured).
Hardware Specification | No | The paper mentions only "a single GPU" for training, without specifying the GPU model or any other hardware components (CPU, RAM, etc.) used for the experiments.
Software Dependencies | No | The paper mentions using ResNet-50 but does not list the software dependencies or library versions (e.g., Python, PyTorch/TensorFlow, CUDA) needed to replicate the experiments.
Experiment Setup | Yes | For a batch of re-ID data, we experimentally set λ1 to λ4 as 0.5, 1.5, 1, 1. For a batch of synthesized data, λ1 to λ4 are set to 0, 0, 1, 0, where the re-ID losses and Triplet re-ID constraints (losses) are not used. The margin parameter m is set to 0.3 experimentally.
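
As a reading aid, below is a minimal sketch of how the reported weighting scheme could be wired up in PyTorch. Only the λ values and the margin m = 0.3 come from the paper; the loss-term names, the `combine_losses` helper, and the use of `nn.TripletMarginLoss` are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the reported loss weighting; only the lambda values
# and the triplet margin m = 0.3 are taken from the paper.
import torch.nn as nn

# Weights lambda_1..lambda_4 per batch type, as reported in the paper.
LAMBDAS_REID = (0.5, 1.5, 1.0, 1.0)   # batch of real re-ID data
LAMBDAS_SYNTH = (0.0, 0.0, 1.0, 0.0)  # synthesized batch: re-ID and triplet losses disabled

# Triplet re-ID constraint with the reported margin (assumed loss class).
triplet_loss = nn.TripletMarginLoss(margin=0.3)

def combine_losses(l1, l2, l3, l4, synthesized: bool):
    """Weighted sum of the four loss terms for one training batch."""
    lambdas = LAMBDAS_SYNTH if synthesized else LAMBDAS_REID
    return sum(w * l for w, l in zip(lambdas, (l1, l2, l3, l4)))
```

In a training loop, the four per-batch loss terms would be computed and passed to `combine_losses`, with `synthesized=True` for batches of synthesized data so that only the third term contributes.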