Towards Accurate Facial Motion Retargeting with Identity-Consistent and Expression-Exclusive Constraints
Authors: Langyuan Mo, Haokun Li, Chaoyang Zou, Yubing Zhang, Ming Yang, Yihong Yang, Mingkui Tan
AAAI 2022, pp. 1981-1989 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on facial motion retargeting and 3D face reconstruction tasks demonstrate the superiority of the proposed method over existing methods. Our code and supplementary materials are available at https://github.com/deepmo24/CPEM. |
| Researcher Affiliation | Collaboration | Langyuan Mo (1,2), Haokun Li (1), Chaoyang Zou (3), Yubing Zhang (3), Ming Yang (3), Yihong Yang (4), Mingkui Tan (1,5)*; 1 School of Software Engineering, South China University of Technology; 2 Pazhou Laboratory; 3 CVTE Research; 4 MINIEYE; 5 Key Laboratory of Big Data and Intelligent Robot (South China University of Technology), Ministry of Education |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and supplementary materials are available at https://github.com/deepmo24/CPEM. |
| Open Datasets | Yes | We train our model on three publicly available datasets: VoxCeleb2 (Joon Son et al. 2018), 300W-LP (Zhu et al. 2016) and FEAFA (Yan et al. 2019). |
| Dataset Splits | No | The paper mentions training and testing but does not detail validation splits or give explicit split percentages for any dataset. |
| Hardware Specification | No | The paper mentions using ResNet50 as the backbone but does not specify any hardware details like GPU model, CPU type, or memory. |
| Software Dependencies | Yes | We implement our method based on PyTorch (Paszke et al. 2019) and use the differentiable renderer from PyTorch3D (Lassner and Zollhöfer 2021). (A hedged renderer-setup sketch follows the table.) |
| Experiment Setup | Yes | We use an Adam optimizer (Kingma and Ba 2015) with a learning rate of 1e-4. We train our model for 300K iterations with a batch size of 8 and an input size of 224×224, and only use the expression-exclusive loss in the last 100K iterations. ... By default, we set T = 4, λ_idc = 1000, and λ_exp = 10. (A hedged training-schedule sketch also follows the table.) |
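
The paper states only that the differentiable renderer comes from PyTorch3D; it does not describe camera, lighting, or shading settings. The sketch below shows one way to wire up a PyTorch3D mesh renderer at the reported 224×224 resolution; the perspective camera, point light, and soft Phong shader are illustrative assumptions, not the authors' configuration.

```python
import torch
from pytorch3d.renderer import (
    FoVPerspectiveCameras,
    MeshRasterizer,
    MeshRenderer,
    PointLights,
    RasterizationSettings,
    SoftPhongShader,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative camera/lighting/shading choices; the paper does not specify these.
cameras = FoVPerspectiveCameras(device=device)
lights = PointLights(device=device, location=[[0.0, 0.0, 3.0]])
raster_settings = RasterizationSettings(
    image_size=224,      # matches the reported 224x224 input size
    blur_radius=0.0,
    faces_per_pixel=1,
)

renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster_settings),
    shader=SoftPhongShader(device=device, cameras=cameras, lights=lights),
)
# renderer(meshes) then produces differentiable RGBA renderings of a
# pytorch3d.structures.Meshes batch, usable inside photometric losses.
```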
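
Likewise, a minimal sketch of the reported optimisation settings (Adam, lr 1e-4, batch size 8, 224×224 inputs, 300K iterations, expression-exclusive loss enabled only for the last 100K, λ_idc = 1000, λ_exp = 10, T = 4). The loss terms, the backbone output dimension, and the dummy data are hypothetical placeholders; only the hyperparameters come from the paper.

```python
import torch
from torchvision.models import resnet50

TOTAL_ITERS = 300_000
EXP_LOSS_START_ITER = TOTAL_ITERS - 100_000  # expression-exclusive loss: last 100K only
LAMBDA_IDC, LAMBDA_EXP = 1000.0, 10.0        # reported loss weights
T = 4                                        # reported default; its exact role is not detailed here

model = resnet50(num_classes=128)            # assumed coefficient dimension, not from the paper
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def base_loss(pred):                  # placeholder for the paper's reconstruction losses
    return pred.pow(2).mean()

def identity_consistent_loss(pred):   # placeholder, weighted by lambda_idc
    return pred.abs().mean()

def expression_exclusive_loss(pred):  # placeholder, weighted by lambda_exp
    return pred.abs().mean()

for it in range(TOTAL_ITERS):
    images = torch.randn(8, 3, 224, 224)     # dummy batch standing in for real 224x224 crops
    pred = model(images)
    loss = base_loss(pred) + LAMBDA_IDC * identity_consistent_loss(pred)
    if it >= EXP_LOSS_START_ITER:            # switch the term on after 200K iterations
        loss = loss + LAMBDA_EXP * expression_exclusive_loss(pred)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The only scheduling detail the paper gives is that the expression-exclusive term switches on for the final 100K iterations; everything else in the loop body is placeholder logic.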