DreamHuman: Animatable 3D Avatars from Text
Authors: Nikos Kolotouros, Thiemo Alldieck, Andrei Zanfir, Eduard Gabriel Bazavan, Mihai Fieraru, Cristian Sminchisescu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we illustrate the effectiveness of our proposed method. We show how the individual proposed components help, and how we compare to recent state-of-the-art methods. We demonstrate that our method is capable of generating a wide variety of animatable, realistic 3D human models from text. Our 3D models have diverse appearance, clothing, skin tones and body shapes, and significantly outperform both generic text-to-3D approaches and previous text-based 3D avatar generators in visual fidelity. |
| Researcher Affiliation | Industry | Nikos Kolotouros, Thiemo Alldieck, Andrei Zanfir, Eduard Gabriel Bazavan, Mihai Fieraru, Cristian Sminchisescu; Google Research; {kolotouros,alldieck,andreiz,egbazavan,fieraru,sminchisescu}@google.com |
| Pseudocode | No | The paper describes the methodology in text and mathematical formulas but does not provide structured pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not provide a direct link to the source code for the described methodology or explicitly state its release. |
| Open Datasets | Yes | At each optimization step, we sample a random pose from a distribution [68] trained on 3D motion capture [1, 21, 12, 14, 13] |
| Dataset Splits | No | The paper mentions using motion capture data for pose sampling and text prompts for evaluation (160 for CLIP, 20 for the user study), but it does not specify explicit training, validation, or test dataset splits for the main model generation process (a hedged CLIP-scoring sketch follows the table). |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU or CPU models, memory specifications) used to run the experiments. |
| Software Dependencies | No | The paper does not specify the version numbers for any software dependencies (e.g., Python, PyTorch, CUDA versions) used in the experiments. |
| Experiment Setup | No | The paper describes some general aspects of the optimization process, such as sampling random poses and camera positions at each step (see the sketch after this table), but it does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, epochs) or detailed training configurations. |
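
To make the sampling described in the "Open Datasets" and "Experiment Setup" rows concrete, here is a minimal, hypothetical sketch of drawing a random body pose from a mocap-trained prior and a random camera at each optimization step. The names (`PosePrior`, `sample_camera`), the pose dimensionality, and the camera ranges are illustrative assumptions, not the authors' implementation, which has not been released.

```python
# Hypothetical per-step sampling, loosely following the description in the
# table above: draw a plausible body pose from a prior trained on motion
# capture, and place a camera at a random viewpoint around the avatar.
# Rendering and losses are omitted; all constants are assumptions.
import numpy as np

POSE_DIM = 63        # assumed: 21 body joints x 3 axis-angle parameters
LATENT_DIM = 32      # assumed size of the pose-prior latent space

class PosePrior:
    """Stand-in for a pose distribution trained on motion-capture data.

    A real prior (e.g. a VAE over mocap poses) would decode latents into
    plausible body poses; here the decoder is a fixed random projection,
    which only illustrates the sampling interface.
    """
    def __init__(self, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.decoder = rng.normal(scale=0.1, size=(LATENT_DIM, POSE_DIM))

    def sample(self, rng: np.random.Generator) -> np.ndarray:
        z = rng.normal(size=LATENT_DIM)   # latent drawn from N(0, I)
        return z @ self.decoder           # decoded to an axis-angle pose vector

def sample_camera(rng: np.random.Generator,
                  radius_range=(1.5, 2.5),
                  elev_range_deg=(-10.0, 30.0)) -> dict:
    """Sample a random viewpoint on a sphere around the subject."""
    azimuth = rng.uniform(0.0, 2.0 * np.pi)
    elevation = np.deg2rad(rng.uniform(*elev_range_deg))
    radius = rng.uniform(*radius_range)
    eye = radius * np.array([np.cos(elevation) * np.sin(azimuth),
                             np.sin(elevation),
                             np.cos(elevation) * np.cos(azimuth)])
    return {"eye": eye, "target": np.zeros(3)}

# A few "optimization steps" worth of sampling (model updates omitted).
rng = np.random.default_rng(42)
prior = PosePrior()
for step in range(3):
    pose = prior.sample(rng)       # random plausible body pose
    camera = sample_camera(rng)    # random camera around the avatar
    print(step, pose.shape, np.round(camera["eye"], 2))
```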
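The "Dataset Splits" row mentions that 160 prompts are used for CLIP-based evaluation. The sketch below shows one common CLIP-style protocol (not necessarily the paper's exact one): checking how often each rendered image retrieves its own prompt by embedding similarity. The embedding dimensionality and the random embeddings standing in for CLIP outputs are assumptions for illustration.

```python
# Hypothetical CLIP-style prompt evaluation: given one rendered image per
# prompt and precomputed image/text embeddings, count how often each image's
# most similar text embedding is its own prompt (an R-precision-at-1 style
# score). Random vectors stand in for real CLIP embeddings here.
import numpy as np

def clip_retrieval_accuracy(image_emb: np.ndarray, text_emb: np.ndarray) -> float:
    """Fraction of images whose best-matching text embedding is their own prompt."""
    # Normalize so dot products become cosine similarities.
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    sims = image_emb @ text_emb.T                  # (num_images, num_prompts)
    retrieved = sims.argmax(axis=1)                # best-matching prompt per image
    return float((retrieved == np.arange(len(image_emb))).mean())

# Toy usage: 160 prompts and 512-dimensional embeddings are assumptions.
rng = np.random.default_rng(0)
num_prompts, dim = 160, 512
text_emb = rng.normal(size=(num_prompts, dim))
image_emb = text_emb + 0.1 * rng.normal(size=(num_prompts, dim))  # images roughly aligned with prompts
print(f"retrieval accuracy: {clip_retrieval_accuracy(image_emb, text_emb):.2f}")
```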