Learning Dense Correspondence for NeRF-Based Face Reenactment

Authors: Songlin Yang, Wei Wang, Yushi Lan, Xiangyu Fan, Bo Peng, Lei Yang, Jing Dong

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments demonstrate that we produce better results in fine-grained motion control and identity preservation than previous methods." |
| Researcher Affiliation | Collaboration | (1) School of Artificial Intelligence, University of Chinese Academy of Sciences, China; (2) CRIPAC & MAIS, Institute of Automation, Chinese Academy of Sciences, China; (3) S-Lab, Nanyang Technological University, Singapore; (4) SenseTime, China |
| Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a direct link or an explicit statement about releasing the source code for the described methodology. |
| Open Datasets | Yes | "We conduct experiments over three commonly used datasets: VoxCeleb1 (Nagrani, Chung, and Zisserman 2017), VoxCeleb2 (Chung, Nagrani, and Zisserman 2018), and TalkingHead-1KH (Wang, Mallya, and Liu 2021)." |
| Dataset Splits | No | The paper does not provide specific percentages or counts for the training, validation, and test splits; it only notes that "The selected videos for the test are not overlapped with the training videos." |
| Hardware Specification | Yes | "Using Adam optimizer (set learning rate as 0.0001), the training takes about 4 days on 8 Tesla V100 GPUs while the fine-tuning takes 1 day." |
| Software Dependencies | No | The paper mentions software components such as ResNet10, StyleGAN-based generators, and face-parsing.PyTorch, but does not give version numbers for these dependencies (e.g., the PyTorch version or a specific StyleGAN library version). |
| Experiment Setup | Yes | "The λ1, λ2, and λ3 are set as 1.0, 1.5, and 10. The iteration ratio of self-reenactment and cross-identity reenactment is better set at 2:1. Using Adam optimizer (set learning rate as 0.0001), the training takes about 4 days on 8 Tesla V100 GPUs while the fine-tuning takes 1 day." A sketch of this configuration is given after the table. |
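
The Experiment Setup row pins down a few concrete hyperparameters: Adam with learning rate 0.0001, loss weights λ1 = 1.0, λ2 = 1.5, λ3 = 10, and a 2:1 iteration ratio of self-reenactment to cross-identity reenactment. Below is a minimal PyTorch sketch of how that configuration could be wired up; only those reported values come from the paper, while the stand-in model, the placeholder loss terms, and the random batches are hypothetical and are not the authors' implementation.

```python
# Minimal sketch of the reported training configuration (assumptions noted).
import torch
import torch.nn as nn
import torch.nn.functional as F

LAMBDA_1, LAMBDA_2, LAMBDA_3 = 1.0, 1.5, 10.0  # loss weights reported in the paper

model = nn.Linear(64, 64)  # hypothetical stand-in for the NeRF-based reenactment network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # reported optimizer and learning rate

def training_step(x, target, mode):
    """One optimization step.

    `mode` is "self" or "cross"; in the actual method the source and driving
    frames would come from the same identity ("self") or different identities
    ("cross"). This toy version only illustrates the weighted-loss structure.
    """
    pred = model(x)
    # Placeholder loss terms: the paper combines three losses with the
    # weights above, but their exact definitions are not restated here.
    loss_1 = F.l1_loss(pred, target)
    loss_2 = F.mse_loss(pred, target)
    loss_3 = pred.var()
    loss = LAMBDA_1 * loss_1 + LAMBDA_2 * loss_2 + LAMBDA_3 * loss_3
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

for step in range(30):
    # Reported 2:1 schedule: two self-reenactment iterations for every
    # cross-identity reenactment iteration.
    mode = "cross" if step % 3 == 2 else "self"
    x, target = torch.randn(8, 64), torch.randn(8, 64)  # hypothetical batch
    training_step(x, target, mode)
```

The 2:1 schedule is expressed here as a simple modulo over the step counter; any scheduling scheme that preserves the reported ratio would serve equally well.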