Coupled Deep Learning for Heterogeneous Face Recognition
Authors: Xiang Wu, Lingxiao Song, Ran He, Tieniu Tan
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that CDL achieves better performance on the challenging CASIA NIR-VIS 2.0 face recognition database, the IIIT-D Sketch database, the CUHK Face Sketch (CUFS), and the CUHK Face Sketch FERET (CUFSF), significantly outperforming state-of-the-art heterogeneous face recognition methods. |
| Researcher Affiliation | Academia | Xiang Wu, Lingxiao Song, Ran He, Tieniu Tan Center for Research on Intelligent Perception and Computing (CRIPAC), National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences, Beijing, P. R. China, 100190 |
| Pseudocode | Yes | Algorithm 1 Coupled Deep Learning (CDL) Training (a hedged sketch of the coupled objective follows the table). |
| Open Source Code | No | The paper does not provide a statement about the open-source availability of the described methodology's code, nor does it provide a direct link to a code repository for CDL. |
| Open Datasets | Yes | The CASIA NIR-VIS 2.0 face database is widely used to evaluate heterogeneous face recognition algorithms. The IIIT-D Sketch database, the CUHK Face Sketch (CUFS), and the CUHK Face Sketch FERET (CUFSF) are all viewed sketch-photo face databases. First, we train the basic light CNN on the MS-Celeb-1M dataset. |
| Dataset Splits | Yes | The CASIA NIR-VIS 2.0 face database... View 1 is used for hyper-parameter adjustment, and View 2 is used for training and testing. For a fair comparison with other methods, we choose the standard protocol in View 2. There are 10-fold experiments in View 2 (see the fold-evaluation sketch after the table). |
| Hardware Specification | Yes | The model is trained on TITAN X for two weeks and the performance on LFW obtains 98.80%. |
| Software Dependencies | No | The paper mentions methods and components such as 'light CNN', 'softmax', 'convolutional neural network', 'stochastic gradient descent', and 'back-propagation', but does not name specific software frameworks or version numbers. |
| Experiment Setup | Yes | The momentum is set to 0.9 and the weight decay to 5e-4; the drop ratio for the fully connected layer is set to 0.7. For pre-training, the learning rate starts at 1e-3 and is reduced gradually to 5e-5; convolution parameters are initialized by Xavier and fully-connected layers by Gaussian. For CDL training, the batch size is set to 128 and the learning rate is decreased from 1e-4 to 1e-6 gradually over around 200,000 iterations. The trade-off parameter λ1 for the softmax term is set to 1, and λ2 for the cross-modal ranking term is increased gradually from 0 to 1. The constant λ for the relevance constraint in softmax is set to 0.001 (a hedged configuration sketch follows the table). |
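
The Pseudocode row refers to Algorithm 1, which couples a softmax (identity classification) term weighted by λ1 with a cross-modal ranking term weighted by λ2. As a rough illustration only, the PyTorch sketch below shows one way such a coupled objective could be assembled; the triplet form of the ranking term, the margin value, and all identifiers (`cdl_loss`, `nir_feat`, etc.) are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of a coupled objective in the spirit of CDL's Algorithm 1:
# a softmax (cross-entropy) term weighted by lambda1 plus a cross-modal
# ranking term weighted by lambda2. The triplet formulation and the
# margin value are assumptions, not taken from the paper.
import torch
import torch.nn.functional as F

def cdl_loss(logits, labels, nir_feat, vis_feat_pos, vis_feat_neg,
             lambda1=1.0, lambda2=1.0, margin=0.5):
    """logits/labels: identity classification on shared features.
    nir_feat: anchor features from the NIR modality.
    vis_feat_pos/neg: same/different-identity features from the VIS modality.
    """
    softmax_term = F.cross_entropy(logits, labels)
    # Cross-modal ranking: pull same-identity NIR/VIS pairs together and
    # push different-identity pairs apart by at least `margin`.
    ranking_term = F.triplet_margin_loss(
        nir_feat, vis_feat_pos, vis_feat_neg, margin=margin)
    return lambda1 * softmax_term + lambda2 * ranking_term
```

Per the Experiment Setup row, λ1 stays fixed at 1 while λ2 is ramped from 0 to 1 during training, so early updates are dominated by the classification term.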
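
The Dataset Splits row notes that View 1 of CASIA NIR-VIS 2.0 is reserved for hyper-parameter tuning while View 2 defines 10 train/test folds. A minimal evaluation loop over those folds might look like the sketch below; the protocol file names and the `evaluate_fold` helper are hypothetical placeholders, since the report does not document the protocol file layout.

```python
# Hedged sketch of iterating the 10-fold View 2 protocol of CASIA
# NIR-VIS 2.0. File names below are hypothetical placeholders, and
# `evaluate_fold` stands in for the full train/match pipeline.
import statistics

def evaluate_fold(train_list, gallery_list, probe_list):
    """Placeholder: fine-tune on train_list, then match NIR probes in
    probe_list against the VIS gallery_list; return rank-1 accuracy."""
    raise NotImplementedError("plug in the actual pipeline here")

def run_view2():
    accuracies = []
    for i in range(1, 11):  # the 10 folds of View 2
        accuracies.append(evaluate_fold(
            train_list=f"protocols/train_{i}.txt",        # hypothetical names
            gallery_list=f"protocols/vis_gallery_{i}.txt",
            probe_list=f"protocols/nir_probe_{i}.txt",
        ))
    # The standard protocol reports mean and std over the 10 folds.
    print(f"rank-1: {statistics.mean(accuracies):.2%} "
          f"+/- {statistics.stdev(accuracies):.2%}")
```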
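
Finally, the Experiment Setup row gives concrete CDL fine-tuning values: momentum 0.9, weight decay 5e-4, dropout 0.7 on the fully connected layer, batch size 128, and a learning rate decayed from 1e-4 to 1e-6 over roughly 200,000 iterations. A hedged PyTorch configuration consistent with those numbers is sketched below; the exponential decay shape, the linear λ2 ramp, and the placeholder backbone are assumptions, since the paper only says the schedules change "gradually".

```python
# Hedged sketch of the reported fine-tuning configuration. The decay
# shape (exponential) and the lambda2 ramp (linear) are assumptions;
# the paper states only the endpoints. The tiny Sequential model is a
# placeholder for the light CNN backbone, which is not reproduced here.
import torch

model = torch.nn.Sequential(      # placeholder, not the actual light CNN
    torch.nn.Flatten(),
    torch.nn.Linear(128 * 128, 256),
    torch.nn.Dropout(p=0.7),      # drop ratio 0.7 on the FC layer
    torch.nn.Linear(256, 256),
)

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-4,                      # fine-tuning starts at 1e-4
    momentum=0.9,
    weight_decay=5e-4,
)

total_iters = 200_000
# Exponential decay from 1e-4 down to 1e-6 across total_iters steps.
gamma = (1e-6 / 1e-4) ** (1.0 / total_iters)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)

def lambda2_at(step):
    """Assumed linear ramp of the cross-modal ranking weight from 0 to 1."""
    return min(1.0, step / total_iters)
```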