MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images

Authors: Shaofei Wang, Marko Mihajlovic, Qianli Ma, Andreas Geiger, Siyu Tang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We qualitatively and quantitatively show that our approach outperforms state-of-the-art approaches that require complete meshes as inputs, while our approach requires only depth frames as inputs and runs orders of magnitude faster. Furthermore, we demonstrate that our meta-learned hypernetwork is very robust, being the first to generate avatars with realistic dynamic cloth deformations given as few as 8 monocular depth frames.
Researcher Affiliation | Academia | Shaofei Wang¹ (shaofei.wang@inf.ethz.ch), Marko Mihajlovic¹ (marko.mihajlovic@inf.ethz.ch), Qianli Ma¹,² (qianli.ma@tue.mpg.de), Andreas Geiger²,³ (a.geiger@uni-tuebingen.de), Siyu Tang¹ (siyu.tang@inf.ethz.ch); ¹ETH Zürich, ²Max Planck Institute for Intelligent Systems, Tübingen, ³University of Tübingen
Pseudocode | Yes | Algorithm 1: Meta-learning SDF with Reptile [49] (a hedged sketch of this loop follows the table)
Open Source Code | Yes | Code and data are public at https://neuralbodies.github.io/metavatar/.
Open Datasets | Yes | We use the CAPE dataset [43] as the major test bed for our experiments. ... Code and data are public at https://neuralbodies.github.io/metavatar/.
Dataset Splits | Yes | The fine-tuning set is used for fine-tuning the MetaAvatar models; it is also used to evaluate the pose interpolation task. The validation set is used for evaluating novel pose extrapolation. ... We use four unseen subjects (00122, 00134, 00215, 03375) for fine-tuning and validation; for each of these four subjects, the corresponding action sequences are split into a fine-tuning set and a validation set. ... we randomly split actions of these two subjects with 70% fine-tuning and 30% validation (a split sketch follows the table).
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, or memory) used to run the experiments or train the models.
Software Dependencies | No | The paper implicitly relies on deep learning frameworks and libraries for its neural implicit representations, but it does not name any software packages or version numbers, which limits reproducibility.
Experiment Setup | Yes | for each subject/cloth-type combination we fine-tune the model for 256 epochs to produce subject/cloth-type specific models ... SGD is used with a mini-batch size of 12 for the inner loop (a fine-tuning sketch follows the table).
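
The Pseudocode row cites Algorithm 1, which meta-learns an SDF network initialization with Reptile [49]. Below is a minimal PyTorch-style sketch of that outer/inner loop, not the authors' released code: `sample_task`, the L1 loss choice, and both learning rates are illustrative assumptions; only the inner-loop use of SGD and the mini-batch size of 12 come from the paper.

```python
import copy

import torch
import torch.nn.functional as F

def reptile_meta_train(model, sample_task, meta_steps=10_000,
                       inner_lr=1e-4, outer_lr=1e-5):
    """Reptile outer loop for meta-learning an SDF network (sketch).

    `model` is the meta-learned SDF network; `sample_task` is an assumed
    helper yielding (points, sdf_gt) mini-batches for one randomly chosen
    subject/cloth-type sequence (mini-batch size 12, per the paper).
    """
    for _ in range(meta_steps):
        task_batches = sample_task()
        fast_model = copy.deepcopy(model)   # task-specific copy
        opt = torch.optim.SGD(fast_model.parameters(), lr=inner_lr)

        # Inner loop: a few SGD steps adapt the copy to the sampled task.
        for points, sdf_gt in task_batches:
            opt.zero_grad()
            loss = F.l1_loss(fast_model(points), sdf_gt)
            loss.backward()
            opt.step()

        # Reptile update: nudge the meta-parameters toward the adapted ones.
        with torch.no_grad():
            for p, fp in zip(model.parameters(), fast_model.parameters()):
                p += outer_lr * (fp - p)
    return model
```

Reptile needs no second-order gradients: the whole meta-gradient approximation is the `(fp - p)` step, which is what makes meta-learning the initialization cheap.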
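
The Dataset Splits row describes a per-subject 70%/30% random split of action sequences into fine-tuning and validation sets. A one-function sketch of such a split; the function name and fixed seed are assumptions, as the paper only states the 70/30 random split.

```python
import random

def split_actions(actions, fine_tune_frac=0.7, seed=0):
    """Randomly split one subject's action sequences into fine-tuning
    and validation sets (70/30 per the paper; the fixed seed is an
    assumed convenience for repeatability)."""
    shuffled = list(actions)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * fine_tune_frac)
    return shuffled[:cut], shuffled[cut:]
```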
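
The Experiment Setup row pins down the fine-tuning recipe: 256 epochs per subject/cloth-type combination with inner-loop SGD on mini-batches of 12. A hedged sketch under those numbers; the learning rate and the data-loader construction are assumptions.

```python
import copy

import torch
import torch.nn.functional as F

def fine_tune(meta_model, loader, epochs=256, lr=1e-4):
    """Specialize the meta-learned SDF model to one subject/cloth type.

    The 256 epochs and the SGD optimizer follow the paper; `loader` is
    assumed to yield (points, sdf_gt) mini-batches of size 12, and the
    learning rate here is an assumed placeholder.
    """
    model = copy.deepcopy(meta_model)   # keep the meta-initialization intact
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for points, sdf_gt in loader:
            opt.zero_grad()
            loss = F.l1_loss(model(points), sdf_gt)
            loss.backward()
            opt.step()
    return model
```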