NeuralClothSim: Neural Deformation Fields Meet the Thin Shell Theory

Authors: Navami Kairanda, Marc Habermann, Christian Theobalt, Vladislav Golyanik

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We next present the qualitative and empirical results highlighting the new characteristics of our continuous neural fields, including validation (Sec. 5.1), simulation results (Sec. 5.2), comparison to prior works (Sec. 5.3), and applications (Sec. 5.4).
Researcher Affiliation | Academia | Max Planck Institute for Informatics, Saarland Informatics Campus
Pseudocode | No | The paper describes the method in prose and mathematical equations but does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | We provide the source code as part of the supplemental document and plan to release it publicly if the paper is accepted.
Open Datasets | No | We do not contribute any dataset.
Dataset Splits | No | The paper does not specify traditional dataset splits (e.g., train/validation/test percentages or counts), as it uses a continuous domain for sampling points (NΩ = 20×20 and Nt = 20) and validates against analytical solutions of benchmark problems.
Hardware Specification | Yes | We run our simulator on a single NVIDIA Quadro RTX 8000 GPU with 48 GB of global memory.
Software Dependencies | No | We implement NeuralClothSim in PyTorch [47] and compute the geometric quantities on the reference shape and on the NDF using its tensor operations; the first- and second-order derivatives are calculated using automatic differentiation.
Experiment Setup | Yes | Our network architecture for NDF is an MLP with sine activations (SIREN) [53] with five hidden layers and 512 units in each layer. We empirically set SIREN's frequency parameter to ω0 = 30 for all experiments... For training, we use NΩ = 20×20 and Nt = 20... NeuralClothSim's training time amounts to 10-30 minutes for most experiments, and the number of training iterations equals 2000-5000. We use the ADAM [30] optimiser with a learning rate of 10^-4.
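The Dataset Splits row notes that training points are drawn from a continuous domain rather than from a fixed dataset. Below is a minimal sketch of one such sampling step; the counts NΩ = 20×20 and Nt = 20 come from the paper, while the normalized [0, 1] coordinate ranges, the use of uniform random sampling, and the name sample_training_points are illustrative assumptions rather than the authors' implementation.

```python
import torch

# Assumed sketch of continuous-domain sampling (not the released code):
# instead of a fixed train/val/test split, a fresh batch of material
# coordinates (xi1, xi2) and times t is drawn at every training iteration.
N_OMEGA = 20 * 20   # N_Omega = 20x20 spatial samples (from the paper)
N_T = 20            # N_t = 20 temporal samples (from the paper)

def sample_training_points():
    """Return a (N_OMEGA * N_T, 3) batch of (xi1, xi2, t) sample points."""
    xi = torch.rand(N_OMEGA, 2)                 # assumed normalized domain [0, 1]^2
    t = torch.rand(N_T, 1)                      # assumed normalized time interval [0, 1]
    xi = xi.repeat_interleave(N_T, dim=0)       # pair every spatial sample ...
    t = t.repeat(N_OMEGA, 1)                    # ... with every temporal sample
    return torch.cat([xi, t], dim=-1)

coords = sample_training_points()               # shape: (8000, 3)
```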
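The Software Dependencies row quotes the use of PyTorch tensor operations and automatic differentiation for first- and second-order derivatives of the neural deformation field (NDF). A hedged sketch of how such derivatives can be taken with torch.autograd.grad follows; the function name, tensor shapes, and looping scheme are assumptions for illustration and are not claimed to match the released code.

```python
import torch

def ndf_derivatives(ndf, coords):
    """Assumed sketch: Jacobian and Hessian of a 3D deformation field
    u(xi1, xi2, t) with respect to its (xi1, xi2, t) inputs via autograd."""
    coords = coords.requires_grad_(True)
    u = ndf(coords)                                              # deformation, shape (N, 3)

    # First-order derivatives: du_i / d(xi1, xi2, t), shape (N, 3, 3).
    first = torch.stack([
        torch.autograd.grad(u[:, i].sum(), coords, create_graph=True)[0]
        for i in range(u.shape[-1])
    ], dim=1)

    # Second-order derivatives (e.g. for bending/curvature terms), shape (N, 3, 3, 3),
    # obtained by differentiating the first derivatives once more.
    second = torch.stack([
        torch.stack([
            torch.autograd.grad(first[:, i, j].sum(), coords, retain_graph=True)[0]
            for j in range(first.shape[2])
        ], dim=1)
        for i in range(first.shape[1])
    ], dim=1)
    return first, second
```

The .sum() trick yields per-sample gradients here because each sample's output depends only on that sample's input coordinates.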
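The Experiment Setup row reports a SIREN MLP with five hidden layers of 512 units, frequency parameter ω0 = 30, and training with ADAM at a 10^-4 learning rate for 2000-5000 iterations. The sketch below assembles those reported hyperparameters into an assumed minimal setup; the SineLayer class omits SIREN's specialised weight initialisation, and the loss is a stand-in placeholder for the thin-shell energy objective described in the paper.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine activation, in the spirit of SIREN
    (specialised SIREN weight initialisation omitted in this sketch)."""
    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# NDF: input (xi1, xi2, t), five hidden layers of 512 units with omega_0 = 30
# (per the paper), linear output producing a 3D deformation.
ndf = nn.Sequential(
    SineLayer(3, 512),
    *[SineLayer(512, 512) for _ in range(4)],
    nn.Linear(512, 3),
)

optimizer = torch.optim.Adam(ndf.parameters(), lr=1e-4)     # ADAM, lr = 10^-4 (from the paper)
for iteration in range(5000):                               # 2000-5000 iterations reported
    coords = torch.rand(20 * 20 * 20, 3)                    # (xi1, xi2, t) batch; see sampling sketch above
    loss = (ndf(coords) ** 2).mean()                        # placeholder; the actual loss is the thin-shell energy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```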