RePFormer: Refinement Pyramid Transformer for Robust Facial Landmark Detection

Authors: Jinpeng Li, Haibo Jin, Shengcai Liao, Ling Shao, Pheng-Ann Heng

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results on four facial landmark detection benchmarks and their various subsets demonstrate the superior performance and high robustness of our framework.
Researcher Affiliation | Collaboration | Jinpeng Li^1, Haibo Jin^2, Shengcai Liao^3, Ling Shao^4 and Pheng-Ann Heng^1,5. ^1 Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China; ^2 Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China; ^3 Inception Institute of Artificial Intelligence (IIAI), UAE; ^4 Terminus Group, China; ^5 Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link regarding the public availability of its source code.
Open Datasets | Yes | We conduct experiments on four popular facial landmark detection datasets including WFLW [Wu et al., 2018], 300W [Sagonas et al., 2013], AFLW-Full [Koestinger et al., 2011], and COFW [Burgos-Artizzu et al., 2013].
Dataset Splits | No | The paper lists the datasets used and some training parameters, but it does not provide explicit training/validation/test splits (e.g., percentages or counts) or reference standard splits in enough detail for reproduction.
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU models, CPU types, or cloud computing instances) used to run the experiments.
Software Dependencies | No | The paper mentions the Adam optimizer and the L1 loss, but it does not name any software with version numbers (e.g., Python, PyTorch, TensorFlow, or CUDA versions).
Experiment Setup | Yes | The Adam optimizer without weight decay is used to train our models; β1 and β2 are set to 0.9 and 0.999, respectively. All models are trained for 360 epochs with a batch size of 16. The initial learning rate is 0.0001, which is decayed by a factor of 10 after 200 epochs. The temperature τ is set to 1000. The L1 loss is used as the loss function for all outputs, and the loss weights are simply set to 1. All input images are resized to 256×256.
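The Experiment Setup row fully specifies the optimization hyperparameters. A minimal sketch of that schedule in PyTorch follows; note that the paper does not state a framework, so the placeholder model and all names below are assumptions — only the hyperparameter values come from the paper.

```python
import torch

# Placeholder module standing in for RePFormer, whose code is not released.
model = torch.nn.Conv2d(3, 98 * 2, kernel_size=3, padding=1)

# Adam without weight decay, beta1 = 0.9, beta2 = 0.999 (values from the paper).
optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.0
)

# Initial learning rate 1e-4, decayed by a factor of 10 after 200 of 360 epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[200], gamma=0.1
)

# L1 loss for all outputs, each with loss weight 1.
criterion = torch.nn.L1Loss()

EPOCHS, BATCH_SIZE, INPUT_SIZE = 360, 16, 256  # inputs resized to 256x256
```

In a training loop, `scheduler.step()` would be called once per epoch after the optimizer updates, dropping the learning rate from 1e-4 to 1e-5 at epoch 200.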