Residual Compensation Networks for Heterogeneous Face Recognition
Authors: Zhongying Deng, Xiaojiang Peng, Yu Qiao (pp. 8239-8246)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on IIIT-D Viewed Sketch, Forensic Sketch, CASIA NIR-VIS 2.0 and CUHK NIR-VIS show that our RCN outperforms other state-of-the-art methods significantly. [...] Our RCN achieves the state-of-the-art performance on four popular HFR datasets, namely 90.34% on IIIT-D Viewed Sketch, 62.26% on Forensic Sketch, 99.32% on CASIA NIR-VIS 2.0 and 99.44% on CUHK NIR-VIS. |
| Researcher Affiliation | Academia | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences Shenzhen, Guangdong Province, China 518055 {zy.deng1, xj.peng, yu.qiao}@siat.ac.cn |
| Pseudocode | No | The paper describes its architecture and methods through text and diagrams, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the described methodology. |
| Open Datasets | Yes | For IIIT-D Viewed Sketch, we take the same training and testing protocols as (Wu et al. 2017) where training set is with the 1,194 image pairs from CUFSF (Zhang, Wang, and Tang 2011)... As the standard evaluation protocols in (Li et al. 2013), we tune parameters on View 1 and report the rank-1 face identification accuracy and verification rate (VR)@false acceptance rate (FAR) on View 2. For CUHK VIS-NIR face dataset... Following (Li et al. 2016), we use 1,438 infrared and visible image pairs as the training set and the remaining 1,438 pairs as the testing set. [...] We pre-train the backbone ResNet-10 on several web-collected data, including CASIA-WebFace (Yi et al. 2014), CACD2000 (Chen, Chen, and Hsu 2015), Celebrity+ (Liu et al. 2015), MSRA-CFW (Zhang et al. 2012), cleaned version of MS-Celeb-1M (Guo et al. 2016) provided by (Wu et al. 2015). (A minimal sketch of the quoted CUHK VIS-NIR split follows the table.) |
| Dataset Splits | Yes | As the standard evaluation protocols in (Li et al. 2013), we tune parameters on View 1 and report the rank-1 face identification accuracy and verification rate (VR)@false acceptance rate (FAR) on View 2. |
| Hardware Specification | No | The paper only states that 'All experiments are carried out based on the Caffe (Jia et al. 2014)' but provides no specific details about the hardware (e.g., GPU, CPU models) used for the experiments. |
| Software Dependencies | No | The paper mentions using 'Caffe (Jia et al. 2014)' but does not specify its version number or any other software dependencies with their respective versions. |
| Experiment Setup | Yes | We set the batch size to 128, i.e. 64 image pairs and initial learning rate to 0.01. To alleviate over-fitting, we freeze all convolutional layers of the pre-trained CNN and only train the FC layers and RC module. We evaluate the hyper-parameter λ in Eq. (6)... (A PyTorch-style sketch of this training setup follows the table.) |
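The "Open Datasets" row quotes a fixed 50/50 protocol for CUHK VIS-NIR: 1,438 NIR/VIS image pairs for training and the remaining 1,438 for testing. The snippet below is only a minimal sketch of that split; the pair identifiers and the random shuffle are assumptions, since the actual protocol of Li et al. (2016) prescribes exactly which pairs belong to each half.

```python
# Minimal sketch of the CUHK VIS-NIR 50/50 split quoted in the "Open Datasets"
# row: 1,438 image pairs for training and the remaining 1,438 for testing.
# The pair identifiers and the shuffle are stand-ins; the official protocol
# (Li et al. 2016) fixes the membership of each half.
import random

def split_cuhk_vis_nir(pair_ids, seed=0):
    """Split 2,876 NIR/VIS pair identifiers into two halves of 1,438 each."""
    assert len(pair_ids) == 2876, "expected 2,876 pairs in total (1,438 + 1,438)"
    ids = list(pair_ids)
    random.Random(seed).shuffle(ids)   # placeholder for the fixed official split
    return ids[:1438], ids[1438:]

train_pairs, test_pairs = split_cuhk_vis_nir([f"pair_{i:04d}" for i in range(2876)])
print(len(train_pairs), len(test_pairs))   # -> 1438 1438
```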
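For the "Experiment Setup" row, the following PyTorch-style sketch illustrates the quoted training configuration: batch size 128 (64 image pairs), initial learning rate 0.01, all convolutional layers of the pre-trained backbone frozen, and only the FC layers and the residual compensation (RC) module trained. The paper's experiments were run in Caffe, so every module, optimizer, shape, and loss below is an assumption used purely for illustration, not the authors' implementation.

```python
# PyTorch-style sketch of the quoted setup (the paper used Caffe; all
# architectures here are toy stand-ins, not the authors' ResNet-10 or RC module).
import torch
import torch.nn as nn

# Stand-ins for the pre-trained backbone, its FC head, and the RC module.
backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
fc_head = nn.Linear(64, 256)
rc_module = nn.Linear(256, 256)    # hypothetical residual-compensation branch

# Freeze every convolutional layer of the pre-trained CNN, as quoted.
for p in backbone.parameters():
    p.requires_grad = False

# Train only the FC layers and the RC module, initial learning rate 0.01.
trainable = list(fc_head.parameters()) + list(rc_module.parameters())
optimizer = torch.optim.SGD(trainable, lr=0.01)

batch_size = 128                              # i.e. 64 image pairs, as quoted
x = torch.randn(batch_size, 3, 128, 128)      # dummy batch; real inputs are face crops
with torch.no_grad():
    shared = backbone(x)                      # frozen backbone features
feat = fc_head(shared)
feat = feat + rc_module(feat)                 # compensation added as a residual term

# One illustrative update step; the loss is a placeholder, not the paper's objective.
loss = feat.pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

The split into frozen and trainable parameter groups mirrors the quoted over-fitting countermeasure; swapping in the real backbone, RC module, and loss from the paper would follow the same pattern.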