Detailed Facial Geometry Recovery from Multi-View Images by Learning an Implicit Function

Authors: Yunze Xiao, Hao Zhu, Haotian Yang, Zhengyu Diao, Xiangju Lu, Xun Cao (pp. 2839-2847)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We use the FaceScape (Yang et al. 2020; Zhu et al. 2021a) dataset to train and validate our method. The scores of CD-mean, CD-rms, and completeness are reported in Table 1. The rendered results, as well as heat maps of error distance from the predicted mesh to the ground-truth mesh, are shown in Figure 4; the results for previous methods are from the fine-tuned models. From the quantitative comparison in Table 1, we can see that our method outperforms previous methods in CD-mean, CD-rms, and completeness for facial reconstruction."
Researcher Affiliation | Collaboration | Yunze Xiao¹, Hao Zhu¹, Haotian Yang¹, Zhengyu Diao¹, Xiangju Lu², Xun Cao¹ (¹Nanjing University, Nanjing, China; ²iQIYI Inc., Beijing, China)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "The code and data are released at https://github.com/zhuhao-nju/mvfr."
Open Datasets | Yes | "We use the FaceScape (Yang et al. 2020; Zhu et al. 2021a) dataset to train and validate our method. The FaceScape dataset contains 7,120 multi-view images and corresponding 3D models..."
Dataset Splits | No | "We use the FaceScape (Yang et al. 2020; Zhu et al. 2021a) dataset to train and validate our method. ... selected 80% of the remaining data as the training set and the other 20% as the testing set." While validation is mentioned, no specific percentage or count is given for a validation split distinct from the 80/20 train/test split.
Hardware Specification | Yes | "We trained the model using an Nvidia RTX 3090 for about 100 hours."
Software Dependencies | No | The paper mentions several components and baselines (e.g., the Adam optimizer, group normalization, MVSNet, Pix2PixHD) but does not provide version numbers for any software dependencies.
Experiment Setup | Yes | "MSE loss is used to train the feature extractor + implicit function, and also the post-regularizer. We train our network using the Adam optimizer with a learning rate of 10^-3, and our model is trained for 200 epochs. The batch size is set to 1."
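The reported recipe (MSE loss, Adam with learning rate 10^-3, batch size 1, an 80/20 train/test split) can be sketched with a toy stand-in for "learning an implicit function": fitting a sphere's signed distance function to sampled distances. This is a minimal illustration under assumed toy data, not the authors' released mvfr code; the sphere model and all variable names here are hypothetical.

```python
import numpy as np

# Toy stand-in for learning an implicit function: fit a sphere SDF
# f(x) = ||x - c|| - r to sampled signed distances, using the reported
# recipe: MSE loss, Adam (lr = 1e-3), batch size 1, 200 epochs.
# Assumed toy data, NOT FaceScape and NOT the released mvfr code.
rng = np.random.default_rng(0)

c_true, r_true = np.array([0.3, -0.2, 0.1]), 0.8   # ground-truth sphere
pts = rng.normal(size=(64, 3))                      # 3D query points
sdf = np.linalg.norm(pts - c_true, axis=1) - r_true # target signed distances

# 80/20 train/test split, as described in the paper.
n_train = int(0.8 * len(pts))
train_pts, train_sdf = pts[:n_train], sdf[:n_train]

theta = np.zeros(4)                  # learnable [cx, cy, cz, r]
m, v = np.zeros(4), np.zeros(4)      # Adam first/second-moment state
lr, b1, b2, eps, t = 1e-3, 0.9, 0.999, 1e-8, 0

for epoch in range(200):
    for x, y in zip(train_pts, train_sdf):          # batch size 1
        d = x - theta[:3]
        dist = np.linalg.norm(d)
        err = (dist - theta[3]) - y                 # prediction error
        # Analytic gradient of the per-sample MSE loss err**2:
        # dL/dc = 2*err * (-(x-c)/||x-c||),  dL/dr = -2*err
        grad = np.concatenate([2 * err * (-d / dist), [-2 * err]])
        t += 1
        m = b1 * m + (1 - b1) * grad
        v = b2 * v + (1 - b2) * grad**2
        theta -= lr * (m / (1 - b1**t)) / (np.sqrt(v / (1 - b2**t)) + eps)
```

After 200 epochs the recovered center and radius sit close to the ground truth, illustrating how the same loss/optimizer configuration drives an implicit-surface fit; the paper's actual model replaces this 4-parameter sphere with a feature extractor plus neural implicit function.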