Surpassing Human-Level Face Verification Performance on LFW with GaussianFace

Authors: Chaochao Lu, Xiaoou Tang

AAAI 2015

Each reproducibility variable is listed below with its result and the LLM response supporting it.
Research Type: Experimental
    Response: "Extensive experiments demonstrated the effectiveness of the proposed model in learning from diverse data sources and generalizing to unseen domains. Specifically, our algorithm achieved an impressive accuracy of 98.52% on the well-known and challenging Labeled Faces in the Wild (LFW) benchmark."
Researcher Affiliation: Academia
    Response: "Chaochao Lu, Xiaoou Tang. Department of Information Engineering, The Chinese University of Hong Kong. {lc013, xtang}@ie.cuhk.edu.hk"
Pseudocode: No
    Response: The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code: No
    Response: The paper does not provide any statement about concrete access to source code for the methodology described.
Open Datasets: Yes
    Response: "LFW (Huang et al. 2007). This dataset contains 13,233 uncontrolled face images of 5,749 public figures with a variety of pose, lighting, expression, race, ethnicity, age, gender, clothing, hairstyles, and other parameters." The paper also uses Multi-PIE (Gross et al. 2010) and MORPH (Ricanek and Tesafaye 2006).
Dataset Splits: Yes
    Response: "More precisely, during the training procedure, the four source-domain datasets are: Web Images, Multi-PIE, MORPH, and Life Photos; the target-domain dataset is the training set in View 1 of LFW, and the validation set is the test set in View 1 of LFW. At test time, we follow the standard 10-fold cross-validation protocol to test our model in View 2 of LFW."
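The standard 10-fold protocol on LFW View 2 can be sketched as follows. This is a minimal illustration of the evaluation procedure described above, not the authors' code: `ten_fold_accuracy` is a hypothetical helper that takes precomputed verification scores and ground-truth same/different labels for the pairs, holds out each fold in turn, picks a decision threshold on the remaining nine folds, and reports the mean held-out accuracy.

```python
import numpy as np

def ten_fold_accuracy(scores, labels, n_folds=10):
    """Sketch of the standard LFW View 2 protocol: split the labeled
    pairs into 10 folds, pick a verification threshold on 9 folds,
    and report mean accuracy over the 10 held-out folds."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    folds = np.array_split(np.arange(len(scores)), n_folds)
    accs = []
    for i in range(n_folds):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != i])
        # Choose the threshold maximizing accuracy on the 9 training folds.
        candidates = np.unique(scores[train_idx])
        best_t = max(candidates,
                     key=lambda t: np.mean((scores[train_idx] >= t) == labels[train_idx]))
        # Evaluate that threshold on the held-out fold.
        accs.append(np.mean((scores[test_idx] >= best_t) == labels[test_idx]))
    return float(np.mean(accs)), float(np.std(accs))
```

In the real protocol the 6,000 View 2 pairs come in 10 predefined folds; here `np.array_split` simply stands in for that fixed partition.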
Hardware Specification: No
    Response: The paper does not provide specific hardware details, such as exact GPU/CPU models or processor types, used for its experiments; it only vaguely mentions "large memory" and a potential "GPU implementation".
Software Dependencies: No
    Response: The paper does not provide specific software dependencies with version numbers for its implementation; it mentions techniques and references other libraries without versions.
Experiment Setup: Yes
    Response: "Our model involves four important parameters: λ in (10), σ in (11), β in (17), and the number of anchors q. Following the same setting in (Kim, Magnani, and Boyd 2006), the regularization parameter λ in (10) is fixed to 10^-8. σ reflects the trade-off between the method's ability to discriminate (small σ) and its ability to generalize (large σ), and β balances the relative importance between the target-domain data and the multi-task learning constraint. Therefore, the validation set (the test set in View 1 of LFW) is used for selecting σ and β. Each time a different number of source-domain datasets is used for training, the corresponding optimal σ and β are selected on the validation set."
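The selection of σ and β on the View 1 validation set, with λ held fixed, amounts to a grid search. The sketch below is an assumption-laden illustration of that procedure, not the authors' implementation: `train_fn` and `eval_fn` are hypothetical stand-ins for fitting the model with given hyperparameters and scoring it on the validation set.

```python
import itertools

def select_hyperparams(train_fn, eval_fn, sigmas, betas, lam=1e-8):
    """Grid-search sketch: pick (sigma, beta) maximizing validation
    accuracy, with the regularization parameter lam held fixed.
    train_fn and eval_fn are hypothetical callables supplied by the user."""
    best = (None, None, float("-inf"))
    for sigma, beta in itertools.product(sigmas, betas):
        model = train_fn(sigma=sigma, beta=beta, lam=lam)
        acc = eval_fn(model)  # accuracy on the validation set
        if acc > best[2]:
            best = (sigma, beta, acc)
    return best
```

As the paper notes, this search is repeated each time a different number of source-domain datasets is used for training, since the optimal σ and β shift with the training mixture.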