FDN: Feature Decoupling Network for Head Pose Estimation

Authors: Hao Zhang, Mengmeng Wang, Yong Liu, Yi Yuan (pp. 12789–12796)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on both in-the-wild and controlled environment datasets demonstrate that the proposed method outperforms other state-of-the-art methods based on a single RGB image and behaves on par with approaches based on multimodal input resources.
Researcher Affiliation | Collaboration | Hao Zhang¹, Mengmeng Wang¹, Yong Liu¹, Yi Yuan² — ¹Institute of Cyber-Systems and Control, Zhejiang University, China; ²NetEase Fuxi AI Lab
Pseudocode | Yes | Algorithm 1: Feature Decoupling Network (FDN) Training
Open Source Code | No | The paper does not include an explicit statement about the availability of open-source code or a link to a code repository for the described methodology.
Open Datasets | Yes | The 300W-LP dataset (Zhu et al. 2016) is a large synthetic dataset... The AFLW2000 dataset (Zhu et al. 2016) contains the ground-truth 3D faces... The BIWI dataset (Fanelli et al. 2013) provides pose annotations...
Dataset Splits | Yes | In protocol 2, we split the videos in the BIWI dataset in a 7:3 ratio for training and testing, respectively, following (Yang et al. 2019).
Hardware Specification | No | The paper states "All experiments are carried out based on Pytorch." but does not provide specific details on the hardware (e.g., GPU/CPU models, memory) used for the experiments.
Software Dependencies | No | The paper mentions "All experiments are carried out based on Pytorch." but does not specify the version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | The trade-off parameters λ and α are set to 2.5 and 0.01, respectively, in all experiments. An SGD optimizer is used to update the centers with a learning rate of 5 × 10⁻⁴, and an Adam optimizer is used to update the network parameters with a learning rate of 1 × 10⁻⁴. The batch size is set to 16, and the network is trained for 100 epochs in total. All images are cropped around the face to include the whole head. After being randomly cropped to 224 × 224, the images are normalized by the ImageNet mean and standard deviation.
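The reported setup can be sketched in plain Python. This is a minimal illustration, not the authors' code (which is not released): the helper names are invented, and the assumption that BIWI's 24 sequences split 17/7 under the 7:3 protocol is ours, not stated in the paper.

```python
# Hyperparameters as reported in the paper's experiment setup.
HPARAMS = {
    "lambda": 2.5,        # trade-off parameter λ
    "alpha": 0.01,        # trade-off parameter α
    "center_lr": 5e-4,    # SGD learning rate for the centers
    "network_lr": 1e-4,   # Adam learning rate for the network
    "batch_size": 16,
    "epochs": 100,
    "crop_size": 224,
}

# ImageNet channel statistics (RGB order) used for normalization.
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize_pixel(rgb):
    """Normalize one RGB pixel with values in [0, 1] by ImageNet mean/std."""
    return tuple((c - m) / s for c, m, s in zip(rgb, IMAGENET_MEAN, IMAGENET_STD))

def split_videos(videos, train_ratio=0.7):
    """Split a list of video sequences 7:3 into train/test (BIWI protocol 2)."""
    n_train = round(len(videos) * train_ratio)
    return videos[:n_train], videos[n_train:]

# BIWI provides 24 sequences; a 7:3 split gives 17 train / 7 test (our assumption).
train, test = split_videos([f"video_{i:02d}" for i in range(24)])
print(len(train), len(test))  # 17 7
```

Pixels equal to the ImageNet mean map to zero after normalization, which is a quick sanity check on the statistics.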