Adversarial Pose Regression Network for Pose-Invariant Face Recognitions
Authors: Pengyu Li, Biao Wang, Lei Zhang | pp. 1940-1948
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments show that the proposed APRN consistently and significantly boosts the performance of baseline networks without extra computational costs in the inference phase. APRN achieves performance comparable or even superior to the state-of-the-art on the CFP, Multi-PIE, IJB-A and MegaFace datasets. The code will be released, hoping to nourish our proposals to other computer vision fields. |
| Researcher Affiliation | Collaboration | Pengyu Li (1), Biao Wang (1), Lei Zhang (1,2) — (1) Artificial Intelligence Center, DAMO Academy, Alibaba Group; (2) Department of Computing, The Hong Kong Polytechnic University. lipengyu007@gmail.com, wangbiao225@foxmail.com, cslzhang@comp.polyu.edu.hk |
| Pseudocode | No | The paper does not contain pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The code will be released, hoping to nourish our proposals to other computer vision fields. https://github.com/pengyuLPY/Adversarial-Pose-Regression-Network-for-Pose-Invariant-Face-Recognitions |
| Open Datasets | Yes | In this paper, MS-Celeb-1M (MS-1M) (Guo et al. 2016) and CASIA-WebFace (CASIA) (Yi et al. 2014) are used as training datasets respectively in different experiments. |
| Dataset Splits | Yes | The CASIA-WebFace consists of 494,414 near-frontal faces of 10,575 subjects from the internet. The Multi-PIE (Gross et al. 2010) dataset consists of 754,200 images of 337 subjects. ... The first 200 subjects are used for training. The remaining 137 subjects are used for testing. CFP (Sengupta et al. 2016), LFW (Huang et al. 2008), IJB-A (Klare et al. 2015) and MegaFace (Kemelmacher-Shlizerman et al. 2016) are used as evaluation datasets. The LFW consists of 13,233 web photos of 5,749 celebrities which are divided into 6,000 face pairs in 10 splits. In this paper, we follow the standard protocols of LFW and CFP and report their mean accuracy and the standard error of the mean. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. It only mentions 'implemented on the publicly available PyTorch platform'. |
| Software Dependencies | No | The paper mentions 'implemented on the publicly available PyTorch platform (Paszke et al. 2017)' but does not specify the version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The network is trained for 30 epochs. The learning rate of Baseline models is 0.1 and decays 10 times at the 20th, 27th and 29th epoch. Momentum is 0.9, weight decay is 0.0005, and α in Equation 1 is 0.2. ... The networks are optimized with SGD for 20 epochs. The learning rate of the Baseline models is 0.01 and decays ten times at the 16th, 18th and 19th epoch. Momentum is 0.9, weight decay is 0.0005, and α is 0.2. |
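The quoted learning-rate schedules can be sketched as a small step-decay function. This is a minimal illustration only, assuming "decays 10 times" means the rate is divided by 10 at each milestone epoch; the helper name `learning_rate` and the epoch-indexing convention are assumptions, not the authors' code.

```python
def learning_rate(epoch, base_lr=0.1, milestones=(20, 27, 29), gamma=0.1):
    """Step-decay schedule: multiply the base rate by `gamma` (here 1/10)
    at each milestone epoch, matching the paper's stated MS-1M settings
    (lr 0.1 over 30 epochs, decayed at the 20th, 27th and 29th epoch)."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# The CASIA schedule would use base_lr=0.01, milestones=(16, 18, 19) over
# 20 epochs. Either way, the optimizer is SGD with momentum=0.9 and
# weight_decay=0.0005, per the quoted setup.
schedule = [learning_rate(e) for e in range(30)]
```

In PyTorch this corresponds to `torch.optim.SGD` combined with a `MultiStepLR` scheduler using the same milestones and `gamma=0.1`.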