Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Sequential End-to-end Network for Efficient Person Search
Authors: Zhengjia Li, Duoqian Miao
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two widely used person search benchmarks, CUHK-SYSU and PRW, have shown that our method achieves state-of-the-art results. |
| Researcher Affiliation | Academia | Zhengjia Li1,2, Duoqian Miao1,2* 1Department of Computer Science and Technology, Tongji University, Shanghai 201804, China 2Key Laboratory of Embedded System and Service Computing, Ministry of Education, Shanghai 201804, China |
| Pseudocode | Yes | Algorithm 1: CBGM |
| Open Source Code | No | The paper does not provide an explicit statement about releasing the source code or a direct link to a code repository. |
| Open Datasets | Yes | CUHK-SYSU (Xiao et al. 2017) is a large scale person search dataset... PRW is another widely used dataset (Zheng et al. 2017) |
| Dataset Splits | No | The paper specifies training and test sets but does not explicitly mention or detail a separate validation set split (e.g., by size or method of creation). |
| Hardware Specification | Yes | We implement our model with PyTorch (Paszke et al. 2017) and run all experiments on one NVIDIA Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions implementing the model with 'PyTorch (Paszke et al. 2017)', but it does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | During training, batch size is 5 and each image is resized to 900 × 1500 pixels. Our model is optimized by Stochastic Gradient Descent (SGD) for 20 epochs (18 epochs for PRW) with initial learning rate of 0.003 which is warmed up during the first epoch and decreased by 10 at the 16-th epoch. The momentum and weight decay of SGD are set to 0.9 and 5 × 10⁻⁴ individually. For CUHK-SYSU/PRW, the circular queue size of OIM is set to 5000/500. At test time, NMS with 0.4/0.5 threshold is used to remove redundant boxes detected by the first/second head. |
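The quoted learning-rate schedule (initial LR 0.003, warmup during the first epoch, decreased by a factor of 10 at the 16-th epoch) can be sketched as a small Python function. This is a hedged reconstruction, not the authors' code: the function name, the linear warmup shape, and the `step_frac` parameter are assumptions made for illustration.

```python
def learning_rate(epoch: int, step_frac: float = 1.0,
                  base_lr: float = 0.003) -> float:
    """Sketch of the paper's reported SGD schedule (assumptions labeled).

    epoch     -- 0-indexed training epoch
    step_frac -- position within the epoch in [0, 1]; only used during
                 warmup (linear warmup shape is an assumption)
    base_lr   -- initial learning rate of 0.003, as reported
    """
    if epoch == 0:
        # "warmed up during the first epoch" -- assumed linear ramp
        return base_lr * step_frac
    if epoch >= 16:
        # "decreased by 10 at the 16-th epoch"
        return base_lr / 10
    return base_lr
```

The remaining SGD settings from the quote (momentum 0.9, weight decay 5 × 10⁻⁴) would be passed to the optimizer separately; they do not affect the schedule itself.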