Progressive Transfer Learning for Person Re-identification

Authors: Zhengxu Yu, Zhongming Jin, Long Wei, Jishun Guo, Jianqiang Huang, Deng Cai, Xiaofei He, Xian-Sheng Hua

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical experiments show that our proposal can improve the performance of the ReID model greatly on the MSMT17, Market-1501, CUHK03 and DukeMTMC-reID datasets.
Researcher Affiliation | Collaboration | 1 State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China; 2 DAMO Academy, Alibaba Group, Hangzhou, China; 3 Fabu Inc., Hangzhou, China; 4 GAC R&D Center, Guangzhou, China
Pseudocode | No | The paper includes mathematical equations (Eq. 1, Eq. 2) but does not present them or any other procedural steps in a formal pseudocode or algorithm block.
Open Source Code | No | The code will be released later on at https://github.com/ZJULearning/PTL
Open Datasets | Yes | We selected four persuasive ReID datasets to evaluate our proposal, including Market-1501, DukeMTMC-reID, MSMT17 and CUHK03. We followed the same dataset split by Wei et al. [Wei et al., 2018], and we also used the evaluation code provided by them (https://github.com/JoinWei-PKU/MSMT17_Evaluation). For all experiments on Market-1501, DukeMTMC-reID and CUHK03, we used the evaluation code provided in Open-ReID (https://github.com/Cysu/open-reid).
Dataset Splits | Yes | We followed the same dataset split by Wei et al. [Wei et al., 2018] (MSMT17). We followed the same dataset split as used in [Wang et al., 2018] (CUHK03). All validation, query and gallery sets of these two datasets are abandoned (Market-1501 and DukeMTMC-reID).
Hardware Specification | No | The paper mentions "GPU usage limitation" but does not specify any particular GPU models, CPU types, or other hardware components used for running the experiments.
Software Dependencies | No | The paper mentions using SGD-M as the optimizer and models like DenseNet-161 and ResNet-50, but it does not specify any software libraries, frameworks (e.g., PyTorch, TensorFlow), or their version numbers.
Experiment Setup | Yes | The initial learning rate is set to 0.01 and is decayed by a factor of ten every ten epochs. Models are fine-tuned for 50 epochs. Unless otherwise stated, in all of our experiments, we use SGD-M as the optimizer. The hyper-parameter λ is set to 0.8, chosen empirically, in the following experiments. DenseNet-161* used a batch size of 90; other experiments involving DenseNet-161 used a batch size of 32.
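
To make the reported recipe concrete, the following is a minimal training-loop sketch. It assumes PyTorch/torchvision, since the paper names no framework; the momentum value, the 751-class head (the Market-1501 training identities), the dummy data loader and the cross-entropy loss are illustrative placeholders rather than the authors' actual PTL objective (which weights its terms with λ = 0.8).

# Minimal sketch of the reported setup, assuming PyTorch/torchvision (the paper
# does not name a framework). Momentum value, 751-class head, dummy loader and
# CrossEntropyLoss are placeholders, not the paper's PTL objective.
import torch
import torchvision

model = torchvision.models.densenet161()  # backbone named in the paper
model.classifier = torch.nn.Linear(model.classifier.in_features, 751)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # SGD-M, initial lr 0.01
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)  # lr / 10 every 10 epochs

# Dummy tensors standing in for a ReID training set; batch size 32 as reported for DenseNet-161.
dataset = torch.utils.data.TensorDataset(
    torch.randn(64, 3, 256, 128), torch.randint(0, 751, (64,)))
train_loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

for epoch in range(50):  # models are fine-tuned for 50 epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()

Under this sketch, the DenseNet-161* variant would only require changing the loader's batch_size from 32 to 90.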