Relightable and Animatable Neural Avatars from Videos

Authors: Wenbin Lin, Chengwei Zheng, Jun-Hai Yong, Feng Xu

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experiments on synthetic and real datasets show that our approach reconstructs high-quality geometry and generates realistic shadows under different body poses. |
| Researcher Affiliation | Academia | School of Software and BNRist, Tsinghua University; lwb20@mails.tsinghua.edu.cn, zhengcw18@gmail.com, yongjh@tsinghua.edu.cn, xufeng2003@gmail.com |
| Pseudocode | No | The paper describes its method using text and mathematical equations but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and data are available at https://wenbin-lin.github.io/RelightableAvatar-page/. |
| Open Datasets | Yes | For the real dataset, we use multi-view dynamic human datasets including the ZJU-MoCap (Peng et al. 2021b), Human3.6M (Ionescu et al. 2014), DeepCap (Habermann et al. 2020) and PeopleSnapshot (Alldieck et al. 2018) datasets. |
| Dataset Splits | No | The paper refers to using datasets for training and evaluation but does not specify explicit train/validation/test splits (e.g., percentages or counts). |
| Hardware Specification | Yes | The network training takes about 2.5 days in total on a single RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions tools like Mixamo, Blender, and Poly Haven for dataset creation, but does not list specific versions of the software dependencies (e.g., libraries, frameworks) required to run the implemented code. |
| Experiment Setup | No | For more details about network architecture and training, please refer to the supplemental document. |