Pose-Invariant Face Recognition via Adaptive Angular Distillation
Authors: Zhenduo Zhang, Yongru Chen, Wenming Yang, Guijin Wang, Qingmin Liao
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two challenging benchmarks (IJB-A and CFP-FP) show that our approach consistently outperforms the existing methods. |
| Researcher Affiliation | Academia | Zhenduo Zhang, Yongru Chen, Wenming Yang*, Guijin Wang, Qingmin Liao Shenzhen International Graduate School/Department of Electronic Engineering, Tsinghua University, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | We use the popular dataset MS-Celeb-1M (Guo et al. 2016) for training both teacher network and the student network. For evaluation, we adopt two benchmarks for pose-invariant face recognition: CFP-FP (Sengupta et al. 2016) and IJB-A (Klare et al. 2015) datasets with official evaluation protocols (Sengupta et al. 2016; Klare et al. 2015). |
| Dataset Splits | No | The paper mentions cleaning the MS-Celeb-1M dataset and using 'official evaluation protocols' for CFP-FP and IJB-A, but it does not specify explicit validation splits (e.g., percentages, counts, or predefined partitions) for its own training process, so the data partitioning cannot be reproduced. |
| Hardware Specification | Yes | We use 4 GeForce GTX 1080 GPUs for training and we select ResNet50, ResNet34 and ResNet18 as backbones due to the limitation of computation capacity. |
| Software Dependencies | No | The paper mentions data pre-processing and model architecture but does not provide specific software details like library names with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The initial learning rate is 0.001 and the default hyper-parameters of our method are λ1 = 0.5, λ2 = 0.5, µ1 = 0.01 and µ2 = 0.4. We set N = 20, C = 5 and M = 8. For all the models during inference stage, we extract the 512-D feature embeddings and use cosine distance as the metric. |
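The inference procedure quoted above (extract 512-D embeddings, compare with cosine distance) can be sketched as follows. This is a minimal illustration, not the paper's code: the function name and the randomly generated embeddings are assumptions standing in for real network outputs.

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance (1 - cosine similarity) between two embeddings."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 512-D feature embeddings in place of real network outputs.
rng = np.random.default_rng(0)
emb1 = rng.standard_normal(512)
emb2 = rng.standard_normal(512)

print(cosine_distance(emb1, emb1))  # ≈ 0.0 for identical embeddings
print(cosine_distance(emb1, emb2))  # larger distance for unrelated embeddings
```

In a verification setting, a pair would be accepted as the same identity when this distance falls below a threshold tuned on the evaluation protocol.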