Doubly Hierarchical Geometric Representations for Strand-based Human Hairstyle Generation

Authors: Yunlu Chen, Francisco Vicente Carrasco, Christian Häne, Giljoo Nam, Jean-Charles Bazin, Fernando D. De la Torre

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical evaluations confirm the capacity of the model to generate convincing guide hair and dense strands, complete with nuanced high-frequency details.
Researcher Affiliation | Collaboration | Carnegie Mellon University and Meta Reality Labs
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | Release of the code will be subject to assessment by the internal legal department. Data will not be released due to the license issues.
Open Datasets | Yes | We use 343 synthetic hairstyles that originate from mesh hair cards [15], thus consisting of smooth strands with straight and wavy structures in low frequencies without high-frequency details. In addition, we collected and crafted 26 base particle hair projects using Blender [8] in the form of genuine 3D strands.
Dataset Splits | No | The paper mentions using "30 test examples" but does not explicitly state how the dataset was split into training, validation, and test sets, nor does it provide counts or percentages for those splits; it only details how training batches are sampled.
Hardware Specification | Yes | All experiments are conducted on a single Nvidia RTX A4500 GPU.
Software Dependencies | No | Our method is implemented with PyTorch [25]. ... We use AdamW [20] ... We use the visualization tool provided by the code repository in [36] for our work. We use the code from [34] to extract k-medoids. The paper names PyTorch, AdamW, and several external codebases, but it does not specify version numbers for PyTorch or any other key software dependency used in the implementation. (See the k-medoids sketch after the table.)
Experiment Setup | Yes | We use AdamW [20] with a learning rate of 3×10⁻⁴ for 100k iterations for both low-frequency and high-frequency models, with an exponential learning rate decay to 3×10⁻⁶ at the end of training. The batch size is 32. (See the training-loop sketch after the table.)
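
The quoted "extract k-medoids" step amounts to selecting representative guide strands out of a dense strand set. Below is a minimal, hedged sketch of that idea in NumPy; it is not the code from [34], and the alternating k-medoids routine, the flattened-strand Euclidean distance, and all array shapes are illustrative assumptions.

```python
# Minimal alternating k-medoids in NumPy (illustrative; not the code from [34]).
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """X: (n, d) array of flattened strands; returns indices of k medoids."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Pairwise Euclidean distances via the ||a||^2 + ||b||^2 - 2 a.b identity.
    sq = (X ** 2).sum(axis=1)
    dist = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0))
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        # Assign every strand to its nearest current medoid.
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:
                # New medoid: the member minimizing total distance to its cluster.
                within = dist[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break  # converged
        medoids = new_medoids
    return medoids

# Example: 1,000 strands, each with 32 3D points, flattened to 96-dim vectors.
strands = np.random.rand(1000, 32 * 3)
guide_idx = k_medoids(strands, k=16)
guide_strands = strands[guide_idx]
```

Unlike k-means centroids, medoids are actual members of the strand set, which is what makes them directly usable as guide strands.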
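
For concreteness, here is a minimal sketch of the reported optimization schedule: AdamW at 3×10⁻⁴, decayed exponentially to 3×10⁻⁶ over 100k iterations with batch size 32. Only the optimizer and scheduler settings come from the paper; the model, data, and loss below are placeholders, not the authors' hair-generation pipeline.

```python
# Sketch of the reported schedule in PyTorch: AdamW, lr 3e-4 -> 3e-6
# exponentially over 100k iterations, batch size 32. Model, data, and loss
# are placeholders, not the paper's pipeline.
import torch

NUM_ITERS = 100_000
LR_START, LR_END = 3e-4, 3e-6
BATCH_SIZE = 32

model = torch.nn.Linear(128, 128)  # placeholder network
optimizer = torch.optim.AdamW(model.parameters(), lr=LR_START)

# Per-step factor gamma such that LR_START * gamma**NUM_ITERS == LR_END.
gamma = (LR_END / LR_START) ** (1.0 / NUM_ITERS)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)

for step in range(NUM_ITERS):
    batch = torch.randn(BATCH_SIZE, 128)         # placeholder batch
    loss = (model(batch) - batch).pow(2).mean()  # placeholder loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()  # multiplies the learning rate by gamma each iteration
```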