Asymmetric Joint Learning for Heterogeneous Face Recognition

Authors: Bing Cao, Nannan Wang, Xinbo Gao, Jie Li

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on viewed sketch database, forensic sketch database and near-infrared image database illustrate that the proposed AJL-HFR method achieves superior performance in comparison to state-of-the-art methods.
Researcher Affiliation | Academia | (1) State Key Laboratory of Integrated Services Networks, School of Electronic Engineering, Xidian University, Xi'an 710071, China; (2) State Key Laboratory of Integrated Services Networks, School of Telecommunications, Xidian University, Xi'an 710071, China
Pseudocode | Yes | Algorithm 1 (AJL-HFR). Input: training set A, probe image p, gallery dataset G. Step 1: generate synthesized image pairs for training set A with three face sketch synthesis methods (RSLCR, MWF, GANs); let B denote the set of the synthesized image pairs together with the original training image pairs. Step 2: initialize the inter-class covariance matrix Sμ^{ot} from the image pairs of training set A and the intra-class covariance matrix Sε^{ot,st} from the image pairs of dataset B. Step 3: apply an EM strategy to jointly optimize Sμ^{ot} and Sε^{ot,st}, then calculate M and N according to equations (9) and (10), respectively. Step 4: calculate the similarity between probe image p and each image in gallery dataset G, and sort the similarities in descending order. Output: the target heterogeneous face image t in gallery dataset G. (A control-flow sketch of this algorithm is given after the table.)
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | Yes | We evaluate our methods on four databases: CUHK Face Sketch FERET (CUFSF) database (Zhang, Wang, and Tang 2011), IIIT-D Sketch database (Bhatt et al. 2012), Forensic Sketch database (Peng et al. 2017) and CUHK VIS-NIR database (Gong et al. 2017).
Dataset Splits | No | The paper describes training and testing splits for the individual datasets (e.g., for CUFSF: '500 subjects are randomly selected as the training set. The remaining 694 subjects are used for test.'; for CUHK VIS-NIR: 'randomly divide the database into two halves without overlapping, one half for training and the other half for testing.'), but it does not explicitly mention a separate validation set or its size/percentage.
Hardware Specification | Yes | All experiments are conducted on the Windows 7 operating system with an i7-4790 3.6 GHz CPU, under the environment of MATLAB R2016b software.
Software Dependencies | Yes | All experiments are conducted on the Windows 7 operating system with an i7-4790 3.6 GHz CPU, under the environment of MATLAB R2016b software.
Experiment Setup | Yes | All the images used in this paper are aligned according to the eye centers, and each image is cropped to 250 × 200 pixels. Each image patch is of size 10 × 10, and we keep 50% overlap between adjacent patches. ... 750 is the best dimensionality for our framework. (A short patch-extraction sketch based on these settings follows the table.)
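
The pseudocode row above only summarizes the control flow of Algorithm 1. The synthesis methods (RSLCR, MWF, GANs), the joint EM update, and equations (9) and (10) that define M and N are not reproduced in this report, so the Python sketch below is a hypothetical skeleton that mirrors the stated order of the four steps. Every helper (synthesize_pairs, init_covariances, em_optimize, rank_gallery) and the bilinear similarity score are placeholder assumptions, not the authors' formulation.

```python
"""Control-flow sketch of Algorithm 1 (AJL-HFR) as summarized above.

NOT the authors' implementation: the sketch-synthesis methods, the EM update,
and Eqs. (9)-(10) for M and N are not available in the excerpt, so the helpers
below only mirror the stated order of steps with placeholder math.
"""
import numpy as np


def synthesize_pairs(train_pairs, rng):
    # Step 1 placeholder: a real run would call RSLCR, MWF and a GAN-based
    # synthesizer to generate extra photo/sketch pairs; here we just perturb
    # the originals so the rest of the pipeline has data to work with.
    return [(photo, sketch + 0.01 * rng.standard_normal(sketch.shape))
            for photo, sketch in train_pairs]


def init_covariances(orig_pairs, all_pairs):
    # Step 2: inter-class covariance from the original pairs (set A) and
    # intra-class covariance from originals plus synthesized pairs (set B).
    photos = np.stack([p for p, _ in orig_pairs])
    diffs = np.stack([p - s for p, s in all_pairs])
    s_mu = np.cov(photos, rowvar=False)
    s_eps = np.cov(diffs, rowvar=False)
    return s_mu, s_eps


def em_optimize(s_mu, s_eps):
    # Step 3 placeholder: the joint EM refinement and the closed forms for
    # M and N (Eqs. 9-10) are not reproduced, so we return generic
    # regularized inverses purely for illustration.
    d = s_mu.shape[0]
    m = np.linalg.inv(s_mu + s_eps + 1e-3 * np.eye(d))
    n = np.linalg.inv(s_eps + 1e-3 * np.eye(d))
    return m, n


def rank_gallery(probe, gallery, m, n):
    # Step 4: score the probe against every gallery image and sort the
    # similarities in descending order (illustrative bilinear score only).
    scores = [float(probe @ m @ g - 0.5 * probe @ n @ probe - 0.5 * g @ n @ g)
              for g in gallery]
    order = np.argsort(scores)[::-1]
    return order, [scores[i] for i in order]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 16                                  # toy feature dimension (paper uses 750)
    train = [(rng.standard_normal(dim), rng.standard_normal(dim)) for _ in range(20)]
    gallery = [rng.standard_normal(dim) for _ in range(5)]
    probe = gallery[2] + 0.05 * rng.standard_normal(dim)    # noisy copy of gallery[2]

    synthesized = synthesize_pairs(train, rng)               # Step 1
    s_mu, s_eps = init_covariances(train, train + synthesized)  # Step 2
    m, n = em_optimize(s_mu, s_eps)                          # Step 3
    order, scores = rank_gallery(probe, gallery, m, n)       # Step 4
    print("best match index:", order[0])
```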
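
The experiment-setup row fixes the preprocessing geometry: aligned images cropped to 250 × 200, patches of 10 × 10 with 50% overlap, i.e. a stride of 5 pixels. The snippet below only illustrates that patch grid, which yields a 49 × 39 layout of 1,911 patches per image; it is not the paper's feature-extraction code, and extract_patches is a made-up helper name.

```python
# Illustration of the patch layout implied by the Experiment Setup row:
# 250x200 images, 10x10 patches, 50% overlap (stride 5). Not the paper's
# feature pipeline; it only shows how many patches that grid produces.
import numpy as np


def extract_patches(image, patch=10, stride=5):
    """Return all patch x patch windows taken with the given stride."""
    h, w = image.shape
    patches = [image[i:i + patch, j:j + patch]
               for i in range(0, h - patch + 1, stride)
               for j in range(0, w - patch + 1, stride)]
    return np.stack(patches)


if __name__ == "__main__":
    face = np.zeros((250, 200))     # stand-in for an aligned, cropped grayscale image
    patches = extract_patches(face)
    print(patches.shape)            # (1911, 10, 10): a 49 x 39 grid of patches
```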