Self-Reinforced Cascaded Regression for Face Alignment

Authors: Xin Fan, Risheng Liu, Kang Huyan, Yuyao Feng, Zhongxuan Luo

AAAI 2018

Reproducibility assessment (each entry gives the variable, the result, and the LLM response):
Research Type: Experimental. The experiments were performed on six widely used datasets, including FRGC v2.0, LFPW, HELEN, AFW, iBUG, and 300W. All faces are labeled with 68 landmarks, and the alignment error for testing images is computed as the standard mean error normalized by the inter-pupil distance (NME).
Researcher Affiliation: Academia. (1) DUT-RU International School of Information Science & Engineering, Dalian University of Technology, Dalian, China; (2) Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, Dalian, China; (3) School of Mathematical Science, Dalian University of Technology, Dalian, China. Emails: {xin.fan, rsliu, zxluo}@dlut.edu.cn, huyankang@hotmail.com, yyaofeng@gmail.com.
Pseudocode: No. The paper includes mathematical formulations and figures, but no explicitly labeled "Pseudocode" or "Algorithm" blocks with structured steps.
Open Source Code: No. The paper neither states that source code is released nor provides any link to a code repository.
Open Datasets: Yes. The experiments were performed on six widely used datasets, including FRGC v2.0, LFPW, HELEN, AFW, iBUG, and 300W; the 300W test set consists of the test sets of LFPW and HELEN (Le et al. 2012).
Dataset Splits: Yes. The authors started from 100 labeled examples and implemented the self-reinforced version of LBF (SR-LBF) to automatically include 711 extra samples (regarded as unlabeled). In a further experiment, SR-LBF starts from only half of LFPW, i.e., 400 training labels, and the other half are included by the self-reinforced strategy.
Hardware Specification: No. The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies: No. The paper mentions various algorithms and feature descriptors but does not list specific software dependencies (e.g., libraries or frameworks) with version numbers.
Experiment Setup: No. The paper mentions some parameters, such as μ and λ, but does not provide specific hyperparameter values (e.g., learning rate, batch size, epochs, optimizer settings) or detailed training configurations for the experimental setup.
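The NME metric cited in the assessment (mean landmark error normalized by the inter-pupil distance) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the pupil centers are approximated by averaging eye landmarks, and the eye index ranges given in the comment follow the common 68-point 300W markup, which the paper's exact convention may or may not match.

```python
import numpy as np

def nme_inter_pupil(pred, gt, left_pupil_idx, right_pupil_idx):
    """Mean landmark error normalized by the inter-pupil distance.

    pred, gt: (N, 2) arrays of predicted / ground-truth landmarks.
    left_pupil_idx, right_pupil_idx: index (or list of indices) of the
    landmarks whose mean approximates each pupil center -- an assumption,
    since the assessed paper does not spell this out here. In the common
    68-point 300W markup the eye landmarks are 36-41 (left) and 42-47
    (right), 0-based.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # Approximate each pupil center as the mean of the given eye landmarks.
    left = gt[left_pupil_idx].reshape(-1, 2).mean(axis=0)
    right = gt[right_pupil_idx].reshape(-1, 2).mean(axis=0)
    inter_pupil = np.linalg.norm(left - right)
    # Per-landmark Euclidean error, averaged and normalized.
    per_point = np.linalg.norm(pred - gt, axis=1)
    return per_point.mean() / inter_pupil
```

For example, predictions uniformly offset by 0.5 units from ground truth whose pupils are 2 units apart yield an NME of 0.25.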