ULNeF: Untangled Layered Neural Fields for Mix-and-Match Virtual Try-On

Authors: Igor Santesteban, Miguel Otaduy, Nils Thuerey, Dan Casas

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In Table 1 we present an ablation study of the different terms and encodings used to train the implicit representation for open surfaces described in Section 3.1. For each ablation, we show the error of the two fields used in our representation. Table 2 evaluates the runtime performance of our approach. Specifically, we compare the evaluation time of the untangling operator of Buffet et al. [8] (i.e., solving Equation 6) vs. a forward pass of our learned projection operator. In Figure 4 we present a qualitative ablation study of the different terms used to learn our implicit garment model described in Section 3.1." (See the timing sketch after this table.)
Researcher Affiliation | Academia | Igor Santesteban, Universidad Rey Juan Carlos, Madrid, Spain (igor.santesteban@urjc.es); Miguel A. Otaduy, Universidad Rey Juan Carlos, Madrid, Spain (miguel.otaduy@urjc.es); Nils Thuerey, Technical University of Munich, Germany (nils.thuerey@tum.de); Dan Casas, Universidad Rey Juan Carlos, Madrid, Spain (dan.casas@urjc.es)
Pseudocode | No | The paper does not include pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | "To this end, the common solution is to use per-vertex supervised strategies that leverage large datasets of simulated [22, 57, 50, 4] or reconstructed 3D garments [37, 52]. We first preprocess a dataset of garments by simulating each of them in a variety of human shapes."
Dataset Splits | No | "In the supplementary document we provide additional details about the architecture of the neural network, training hyperparameters, and our strategy to sample β and x. Please check the supplementary document for details about training data, architecture and parameters."
Hardware Specification | Yes | "This comparison was conducted on a regular desktop PC equipped with an AMD Ryzen 7 2700 CPU, an Nvidia GTX 1080 Ti GPU, and 32 GB of RAM."
Software Dependencies | No | The paper mentions "machine learning frameworks" but does not specify any software dependencies with version numbers.
Experiment Setup | No | "In the supplementary document we provide additional details about the architecture of the neural network, training hyperparameters, and our strategy to sample β and x. Please check the supplementary document for details about training data, architecture and parameters."
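
The Table 2 comparison quoted in the Research Type row contrasts a single forward pass of the learned projection operator against the iterative untangling operator of Buffet et al. [8]. ULNeF's code is not released, so the sketch below is only a generic illustration of how such a wall-clock comparison can be timed fairly in PyTorch (warm-up passes, GPU-synchronized timers); the stand-in MLP, the `untangle_iterative` placeholder, and all sizes and iteration counts are assumptions rather than the paper's implementation.

```python
# Hypothetical timing harness: one forward pass of a stand-in "projection
# operator" MLP vs. a generic iterative untangling loop. Nothing here
# reproduces ULNeF; architectures and counts are placeholders.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for the learned projection operator (architecture is assumed).
projection_op = torch.nn.Sequential(
    torch.nn.Linear(3 + 16, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 3),
).to(device)

def untangle_iterative(x, n_iters=50, step=0.1):
    """Generic fixed-point iteration standing in for an optimization-based
    untangling solve (repeatedly moving points along a descent direction)."""
    for _ in range(n_iters):
        x = x - step * torch.tanh(x)  # placeholder update rule
    return x

def timed(fn, warmup=3, reps=10):
    """Average wall-clock time of fn(), synchronizing the GPU so that
    asynchronous CUDA kernels are included in the measurement."""
    for _ in range(warmup):
        fn()
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - t0) / reps

points = torch.rand(100_000, 3, device=device)    # query points x
latent = torch.rand(100_000, 16, device=device)   # per-point conditioning code

with torch.no_grad():
    t_forward = timed(lambda: projection_op(torch.cat([points, latent], dim=-1)))
    t_iter = timed(lambda: untangle_iterative(points.clone()))

print(f"learned projection (1 forward pass): {t_forward * 1e3:.2f} ms")
print(f"iterative untangling (placeholder):  {t_iter * 1e3:.2f} ms")
```

On a GPU, synchronizing before reading the timer matters because CUDA kernel launches are asynchronous; without it, the forward pass would appear artificially fast relative to the iterative baseline.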