Scene-Adaptive Person Search via Bilateral Modulations
Authors: Yimin Jiang, Huibing Wang, Jinjia Peng, Xianping Fu, Yang Wang
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | SEAS can achieve state-of-the-art (SOTA) performance on two benchmark datasets, CUHK-SYSU with 97.1% mAP and PRW with 60.5% mAP. |
| Researcher Affiliation | Academia | (1) School of Information Science and Technology, Dalian Maritime University, Dalian, China; (2) School of Cyber Security and Computer, Hebei University, Baoding, China; (3) Key Laboratory of Knowledge Engineering with Big Data, Ministry of Education, Hefei University of Technology, Hefei, China |
| Pseudocode | No | The paper provides detailed descriptions of its networks (BMN, FMN) and processes using text and architectural diagrams (e.g., Figures 3, 4, 5), but it does not include any explicit pseudocode blocks or sections labeled 'Algorithm'. |
| Open Source Code | Yes | The code is available at https://github.com/whbdmu/SEAS. |
| Open Datasets | Yes | CUHK-SYSU [Xiao et al., 2017] is a large-scale person search dataset... PRW [Zheng et al., 2017] is composed of video frames captured by six fixed cameras on a university campus. |
| Dataset Splits | No | For CUHK-SYSU, the dataset is split into a training set with 5,532 identities and 11,206 images, and a test set with 2,900 query persons and 6,978 gallery images. For PRW, the training set consists of 932 identities with 5,704 images, and the test set contains 2,057 query persons and 6,112 scene images. The paper explicitly details the training and test sets but does not mention a separate validation set or how one was derived. |
| Hardware Specification | Yes | We conduct all experiments on an NVIDIA A800 GPU and implement our model with PyTorch [Paszke et al., 2019]. |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'ConvNeXt' as software used but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | During training, we set the batch size to 5 for CUHK-SYSU and 8 for PRW using Automatic Mixed Precision (AMP). Adam is used to optimize our model for 20 epochs with an initial learning rate of 0.0001, which is warmed up during the first epoch and reduced by a factor of 10 at epochs 8 and 14. For Equation 5, we initialize the hyper-parameter M as 0.25 for CUHK-SYSU and 0.35 for PRW. For MGE, the probability of the DropPath is set to 0.1. |
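
The experiment-setup row above describes a standard PyTorch training recipe: Adam, 20 epochs, base learning rate 1e-4, linear warmup over the first epoch, a 10x decay at epochs 8 and 14, and AMP. The sketch below is a minimal illustration of that schedule under those assumptions, not the authors' released code: the `build_optimizer_and_scheduler` and `train` helpers and the assumption that the model returns a single scalar loss are illustrative, and the SEAS architecture, its losses, the hyper-parameter M from Equation 5, and the DropPath rate are omitted.

```python
# Hedged sketch of the reported training schedule (Adam, 20 epochs, lr 1e-4,
# linear warmup over epoch 1, x0.1 decay at epochs 8 and 14, AMP).
# The batch size (5 for CUHK-SYSU, 8 for PRW) would be set on the DataLoader.
import torch


def build_optimizer_and_scheduler(model, iters_per_epoch, base_lr=1e-4):
    # Adam optimizer with the reported initial learning rate.
    optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)

    def lr_lambda(it):
        epoch = it / iters_per_epoch
        # Linear warmup during the first epoch.
        if epoch < 1:
            return it / max(1, iters_per_epoch)
        # Step decay by a factor of 10 at epochs 8 and 14.
        factor = 1.0
        if epoch >= 8:
            factor *= 0.1
        if epoch >= 14:
            factor *= 0.1
        return factor

    # Per-iteration LR scaling via LambdaLR.
    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler


def train(model, loader, device="cuda", epochs=20):
    optimizer, scheduler = build_optimizer_and_scheduler(model, len(loader))
    scaler = torch.cuda.amp.GradScaler()  # Automatic Mixed Precision
    model.to(device).train()
    for _ in range(epochs):
        for images, targets in loader:
            optimizer.zero_grad()
            with torch.cuda.amp.autocast():
                # Assumes the model returns a single scalar training loss.
                loss = model(images.to(device), targets)
            scaler.scale(loss).backward()
            scaler.step(optimizer)
            scaler.update()
            scheduler.step()  # step the LR schedule once per iteration
```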