Divide and Conquer: Hybrid Pre-training for Person Search

Authors: Yanling Tian, Di Chen, Yunan Liu, Jian Yang, Shanshan Zhang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that our pre-trained model can achieve significant improvements across diverse protocols, such as person search method, finetuning data, pre-training data and model backbone. For example, our model improves ResNet50 based NAE by 10.3% relative improvement w.r.t. mAP."
Researcher Affiliation | Academia | "1 PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China; 2 School of Artificial Intelligence, Dalian Maritime University. {yl.tian, dichen, liuyunan, shanshan.zhang, csjyang}@njust.edu.cn"
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Our code and pre-trained models are released for plug-and-play usage to the person search community" (https://github.com/personsearch/PretrainPS).
Open Datasets | Yes | Pre-training uses two relatively large person detection datasets, CrowdHuman (Shao et al. 2018) and EuroCity Persons (ECP) (Braun et al. 2019); two common re-ID datasets, MSMT17 (Wei et al. 2018) and CUHK03 (Li et al. 2014); and one large unlabeled re-ID dataset, LUPerson (Fu et al. 2021). For person search, the two most commonly used datasets are PRW (Zheng et al. 2017) and CUHK-SYSU (Xiao et al. 2017); in addition, the newer PoseTrack21 (Doering et al. 2022) dataset can be used.
Dataset Splits | No | The paper mentions training and testing sets for the CUHK-SYSU and PRW datasets, but does not explicitly state the use or size of a separate validation set for hyperparameter tuning or early stopping.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for the experiments.
Software Dependencies | No | The paper states "Implementation details are provided in our code." but does not list specific software dependencies with version numbers in the main text.
Experiment Setup | No | The paper states "Implementation details are provided in our code." and refers to hyper-parameters in the loss function (λ and η), but does not give their specific values or other training configurations (e.g. learning rate, batch size, optimizer) in the main text.
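The "10.3% relative improvement w.r.t. mAP" quoted in the Research Type row is a relative (not absolute) gain. A minimal sketch of that arithmetic, using hypothetical mAP values for illustration only (the actual NAE baseline numbers are in the paper):

```python
def relative_improvement(baseline_map: float, new_map: float) -> float:
    """Relative improvement of new_map over baseline_map, in percent."""
    return (new_map - baseline_map) / baseline_map * 100.0

# Hypothetical example: a baseline mAP of 50.0 rising to 55.15
# corresponds to a 10.3% relative improvement.
print(round(relative_improvement(50.0, 55.15), 1))  # → 10.3
```

Note the distinction from an absolute gain: the same change expressed absolutely would be 5.15 mAP points.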