Selective Refinement Network for High Performance Face Detection
Authors: Cheng Chi, Shifeng Zhang, Junliang Xing, Zhen Lei, Stan Z. Li, Xudong Zou
AAAI 2019, pp. 8231-8238 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have been conducted on AFW, PASCAL face, FDDB, and WIDER FACE benchmarks and we set a new state-of-the-art performance. From the Experiments section: We first analyze the proposed method in detail to verify the effectiveness of our contributions. Then we evaluate the final model on the common face detection benchmark datasets, including AFW (Zhu and Ramanan 2012), PASCAL Face (Yan et al. 2014b), FDDB (Jain and Learned-Miller 2010), and WIDER FACE (Yang et al. 2016). |
| Researcher Affiliation | Academia | (1) Institute of Electronics, Chinese Academy of Sciences, Beijing, China; (2) CBSR & NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China; (3) University of Chinese Academy of Sciences, Beijing, China |
| Pseudocode | No | The paper does not contain any sections or figures labeled 'Pseudocode' or 'Algorithm', nor does it present structured steps formatted like code. |
| Open Source Code | No | No repository link is provided; the paper only states that "Codes will be released to facilitate further studies on the face detection problem." |
| Open Datasets | Yes | All the models are trained on the training set of the WIDER FACE dataset (Yang et al. 2016). It consists of 393,703 annotated face bounding boxes in 32,203 images with variations in pose, scale, facial expression, occlusion, and lighting condition. |
| Dataset Splits | Yes | The dataset is split into the training (40%), validation (10%) and testing (50%) sets, and defines three levels of difficulty: Easy, Medium, Hard, based on the detection rate of EdgeBox (Zitnick and Dollár 2014). |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or cloud instance specifications used for experiments. |
| Software Dependencies | No | The paper mentions the implementation framework ("We implement SRN using the PyTorch library (Paszke et al. 2017)") but gives no version numbers or other software dependencies. |
| Experiment Setup | Yes | We fine-tune the SRN model using SGD with 0.9 momentum, 0.0001 weight decay, and batch size 32. We set the learning rate to 10^-2 for the first 100 epochs, and decay it to 10^-3 and 10^-4 for another 20 and 10 epochs, respectively. (See the training-schedule sketch below this table.) |
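
As a reading aid, the reported optimization settings can be expressed as a short PyTorch sketch. This is not the authors' code (none has been released): the placeholder model, variable names, and the bare epoch loop are assumptions; only the optimizer hyperparameters and the 100/20/10-epoch learning-rate milestones come from the paper.

```python
import torch

# Hypothetical placeholder for the SRN detector; the real model is not public.
model = torch.nn.Conv2d(3, 16, kernel_size=3)

# SGD with 0.9 momentum and 0.0001 weight decay, as reported in the paper.
optimizer = torch.optim.SGD(
    model.parameters(), lr=1e-2, momentum=0.9, weight_decay=1e-4
)

# Learning rate 1e-2 for the first 100 epochs, then decayed by 10x at
# epochs 100 and 120 (i.e., 1e-3 for 20 epochs, 1e-4 for the final 10).
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[100, 120], gamma=0.1
)

for epoch in range(130):
    # One epoch of training with batch size 32 would go here
    # (forward pass, loss, backward pass, optimizer.step() per batch).
    scheduler.step()  # advance the learning-rate schedule once per epoch
```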