JR2Net: Joint Monocular 3D Face Reconstruction and Reenactment
Authors: Jiaxiang Shang, Yu Zeng, Xin Qiao, Xin Wang, Runze Zhang, Guangyuan Sun, Vishal Patel, Hongbo Fu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our JR2Net outperforms the state-of-the-art methods on several face reconstruction and reenactment benchmarks. |
| Researcher Affiliation | Collaboration | ¹Hong Kong University of Science and Technology, ²Johns Hopkins University, ³Tencent, ⁴City University of Hong Kong |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code, such as a repository link or an explicit statement of code release. |
| Open Datasets | Yes | To train our Rec Net, we combine multiple datasets, including 300W-LP (Zhu et al. 2016), CelebA (Liu et al. 2015), LS3D (Bulat and Tzimiropoulos 2017), and VoxCeleb2 (Chung, Nagrani, and Zisserman 2018), which provide diversified illumination and background for training. |
| Dataset Splits | No | The paper mentions training and testing data but does not provide specific details about validation splits (e.g., percentages, sample counts, or explicit references to predefined validation sets). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details (e.g., library or solver names with version numbers) needed to replicate the experiments. |
| Experiment Setup | No | The paper describes the methodology but does not provide specific experimental setup details such as concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings. |