FAN-Face: a Simple Orthogonal Improvement to Deep Face Recognition
Authors: Jing Yang, Adrian Bulat, Georgios Tzimiropoulos
AAAI 2020, pp. 12621–12628
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted extensive experiments illustrating how the proposed approach, when integrated with existing state-of-the-art methods, systematically improves face recognition accuracy for a wide variety of experimental settings. Our approach sets a new state-of-the-art on the challenging IJB-B (Whitelam et al. 2017), IJB-C (Maze et al. 2018) and MegaFace (Kemelmacher-Shlizerman et al. 2016) datasets. In this section, we evaluate the accuracy of interesting variants and training procedures of the proposed method. The experiments are done by training the models on a randomly selected subset of 1M images from the VGGFace2 dataset and evaluating them on the IJB-B dataset. All results are shown in Table 1. |
| Researcher Affiliation | Collaboration | ¹University of Nottingham, ²Samsung AI Center, Cambridge. {jing.yang2, yorgos.tzimiropoulos}@nottingham.ac.uk, adrian@adrianbulat.com |
| Pseudocode | No | The paper describes the methods textually and with diagrams (e.g., Figure 1, 2, 3) but does not include explicit pseudocode blocks or algorithm listings. |
| Open Source Code | No | The paper does not provide a link to its own open-source code, nor does it explicitly state that the code is being released. It mentions following and using publicly available code from other works (e.g., Deng et al. 2019) for implementation, but not releasing its own. |
| Open Datasets | Yes | Training datasets. We trained our models on 3 popular training datasets: for most of our experiments we used VGGFace2 (Cao et al. 2018) (an improved version of VGGFace (Parkhi et al. 2015))... Besides, we trained our model on MS1MV2 (Deng et al. 2019), a semi-automatically refined version of the MS-Celeb-1M dataset (Guo et al. 2016)... We also trained our model on CASIA-WebFace (Yi et al. 2014). |
| Dataset Splits | No | The paper describes the datasets used for training and testing (VGGFace2 for training; IJB-B, IJB-C, MegaFace, LFW, and YTF for evaluation). It mentions that models were trained on a subset of VGGFace2 and evaluated on IJB-B, but it does not specify explicit training/validation splits (e.g., percentages or counts) for these datasets as part of the model development process. |
| Hardware Specification | No | The paper does not specify the hardware used for training or experiments (e.g., GPU models, CPU types, or memory). |
| Software Dependencies | No | The paper mentions "All models were implemented in PyTorch (Paszke et al. 2017)", but it does not specify the version number of PyTorch or any other software libraries or dependencies used, which is required for reproducibility. |
| Experiment Setup | Yes | Loss functions. To train our networks, we mostly used the ArcFace loss (Deng et al. 2019)... Other hyperparameters. We followed the publicly available code of (Deng et al. 2019) for implementing and training our models. For a fair comparison, we used the same ResNet as ArcFace. FRN and the integration layers were trained from scratch with SGD with a batch size of 512. The weight decay was set to 5e-4 and the momentum to 0.9. Face pre-processing. We followed standard practices in face recognition... to crop a face image of 112×112 (without using landmarks for alignment)... |
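
The hyperparameters quoted in the Experiment Setup row are concrete enough to reconstruct the optimizer configuration. Below is a minimal PyTorch sketch of that setup, assuming a hypothetical `ArcFaceHead` margin-softmax layer standing in for the public ArcFace code the authors say they followed; the initial learning rate is not given in the excerpt and is marked as an assumption in the code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    """Additive angular margin (ArcFace) classification head.

    A sketch of the loss described in Deng et al. 2019; the paper states
    it "mostly used the ArcFace loss" via the publicly available code.
    """
    def __init__(self, feat_dim, num_classes, scale=64.0, margin=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        self.scale = scale    # s in the ArcFace paper (default 64)
        self.margin = margin  # additive angular margin m, in radians

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalised features and class weights.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        # Add the angular margin to the target-class logit only.
        target = F.one_hot(labels, cosine.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.margin), cosine)
        return F.cross_entropy(self.scale * logits, labels)

def make_optimizer(model_params):
    # Optimizer settings quoted in the Experiment Setup row above.
    return torch.optim.SGD(
        model_params,
        lr=0.1,              # assumption: initial LR is not stated in the excerpt
        momentum=0.9,        # stated in the paper
        weight_decay=5e-4,   # stated in the paper
    )
# Batch size 512 and 112x112 input crops are likewise taken from the paper.
```

The `torch.where`-based margin application leaves non-target logits untouched, matching the additive angular margin formulation; note that `scale=64.0` and `margin=0.5` are ArcFace's published defaults, not values confirmed by this excerpt.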