AnchorFace: An Anchor-based Facial Landmark Detector Across Large Poses

Authors: Zixuan Xu, Banghuai Li, Ye Yuan, Miao Geng (pp. 3092–3100)

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Overall, our proposed approach, named AnchorFace, obtains state-of-the-art results with extremely efficient inference speed on four challenging benchmarks, i.e., the AFLW, 300W, Menpo, and WFLW datasets.
Researcher Affiliation | Collaboration | Zixuan Xu (Peking University), Banghuai Li (Megvii Research), Ye Yuan (Megvii Research), and Miao Geng (Beihang University); zixuanxu@pku.edu.cn, {libanghuai,yuanye}@megvii.com, geng m@buaa.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | Code will be available soon.
Open Datasets | Yes | The experiments are evaluated on four challenging datasets: AFLW (Köstinger et al. 2011), 300W (Sagonas et al. 2013), Menpo (Deng et al. 2019; Zafeiriou et al. 2017), and WFLW (Wu et al. 2018).
Dataset Splits | No | The paper mentions training on the datasets and evaluating on their test sets, but does not provide the explicit training/validation/test splits (e.g., percentages or sample counts) needed to reproduce the data partitioning.
Hardware Specification | Yes | The reported speed of 45 FPS is measured on one Nvidia Titan Xp GPU with batch size 1.
Software Dependencies | No | The paper mentions the Adam optimizer and specific network architectures such as ShuffleNet-V2 and HRNet-18, but does not provide version numbers for software dependencies such as programming languages or libraries.
Experiment Setup | Yes | We apply the Adam optimizer with a weight decay of 1×10⁻⁵ and train the network for 50 epochs in total. The learning rate is set to 1×10⁻³ and divided by ten at the 20th, 30th, and 40th epochs. β = 0.05 and λ = 0.5 are applied to all models across the four benchmarks.
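The learning-rate schedule quoted above (base rate 1×10⁻³, divided by ten at the 20th, 30th, and 40th epochs over 50 epochs) can be sketched in plain Python. This is not the authors' code, and the paper does not state whether the milestones are 0- or 1-indexed; the sketch assumes the drop takes effect from epoch 20, 30, and 40 onward (0-indexed), matching the step-decay convention of common deep-learning frameworks.

```python
# Hedged sketch of the step-decay schedule described in the paper's
# Experiment Setup; milestone indexing is an assumption, not stated there.

BASE_LR = 1e-3            # initial learning rate from the paper
MILESTONES = (20, 30, 40)  # epochs at which the LR is divided by ten

def learning_rate(epoch: int) -> float:
    """Return the learning rate used during the given 0-indexed epoch."""
    drops = sum(1 for m in MILESTONES if epoch >= m)
    return BASE_LR * 0.1 ** drops

if __name__ == "__main__":
    # Print the rate at the boundaries of each schedule segment.
    for e in (0, 19, 20, 30, 40, 49):
        print(f"epoch {e:2d}: lr = {learning_rate(e):.0e}")
```

In a framework such as PyTorch, the same schedule would typically be expressed as `torch.optim.Adam(..., lr=1e-3, weight_decay=1e-5)` combined with a `MultiStepLR(milestones=[20, 30, 40], gamma=0.1)` scheduler, though the paper does not name the framework it used.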