Neural Shape Deformation Priors

Authors: Jiapeng Tang, Lev Markhasin, Bi Wang, Justus Thies, Matthias Nießner

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our approach in experiments using the DeformingThings4D dataset, and compare to both classic optimization-based and recent neural network-based methods. Our experiments and ablation studies demonstrate that our method can be applied to challenging new deformations.
Researcher Affiliation | Collaboration | Jiapeng Tang (1), Lev Markhasin (2), Bi Wang (2), Justus Thies (3), Matthias Nießner (1); (1) Technical University of Munich, (2) Sony Europe RDC Stuttgart, (3) Max Planck Institute for Intelligent Systems, Tübingen, Germany
Pseudocode | No | The paper does not contain structured pseudocode or an algorithm block.
Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See the supplemental material.
Open Datasets | Yes | Our experiments are performed on the DeformingThings4D-Animals [39] dataset which contains 1494 non-rigidly deforming animations... In addition, we also include comparisons on another animal dataset used in TOSCA [71].
Dataset Splits | No | For the train/test split, we divide all animations into training (1296) and test (198). The paper specifies train and test data splits but does not explicitly provide details for a separate validation split.
Hardware Specification | No | The paper mentions running experiments but does not provide specific details on the hardware used, such as GPU/CPU models or cluster specifications, within the main text.
Software Dependencies | No | The paper mentions using "PyTorch" and "Adam" but does not specify the exact version numbers required for reproducibility.
Experiment Setup | Yes | We use an Adam [73] optimizer with β₁ = 0.9, β₂ = 0.999, and ϵ = 10⁻⁸. In the first stage, we train the forward and backward deformation networks individually. Specifically, the backward and forward deformation networks are respectively optimized by the objective described in Equation 6 or 7, using a batch size of 16 with a learning rate of 5e-4 for 100 epochs. In the second stage, the whole model is trained according to Equation 8 in an end-to-end manner using a batch size of 6 with a learning rate of 5e-5 for 20 epochs.
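
To make the quoted training schedule concrete, below is a minimal PyTorch sketch of the two-stage optimization described in the Experiment Setup row. Only the Adam hyperparameters, batch sizes, learning rates, and epoch counts come from the quoted text; the network definitions, the L2 placeholder objectives, and the dummy_batch helper are illustrative assumptions, not the paper's architecture or losses.

```python
# Minimal sketch of the two-stage training schedule quoted above.
# Assumptions: DeformationNet, the L2 losses, and dummy_batch are placeholders;
# only the optimizer settings, batch sizes, learning rates, and epoch counts
# follow the paper's reported setup.

import torch
import torch.nn as nn


class DeformationNet(nn.Module):
    """Placeholder stand-in for the forward / backward deformation networks."""

    def __init__(self, dim: int = 3, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(x)  # predict a per-point offset


def make_adam(params, lr: float) -> torch.optim.Adam:
    # Adam with beta1 = 0.9, beta2 = 0.999, eps = 1e-8, as stated in the paper.
    return torch.optim.Adam(params, lr=lr, betas=(0.9, 0.999), eps=1e-8)


def dummy_batch(batch_size: int):
    # Stand-in for a batch from the DeformingThings4D-Animals data loader.
    return torch.randn(batch_size, 3), torch.randn(batch_size, 3)


forward_net, backward_net = DeformationNet(), DeformationNet()

# Stage 1: train the forward and backward deformation networks individually
# (Eq. 6 / Eq. 7 objectives in the paper; an L2 placeholder is used here):
# batch size 16, learning rate 5e-4, 100 epochs.
for net in (forward_net, backward_net):
    opt = make_adam(net.parameters(), lr=5e-4)
    for epoch in range(100):
        src, tgt = dummy_batch(batch_size=16)
        loss = ((net(src) - tgt) ** 2).mean()  # placeholder per-network objective
        opt.zero_grad()
        loss.backward()
        opt.step()

# Stage 2: fine-tune the whole model end to end (Eq. 8 in the paper):
# batch size 6, learning rate 5e-5, 20 epochs.
params = list(forward_net.parameters()) + list(backward_net.parameters())
opt = make_adam(params, lr=5e-5)
for epoch in range(20):
    src, tgt = dummy_batch(batch_size=6)
    loss = ((forward_net(backward_net(src)) - tgt) ** 2).mean()  # placeholder end-to-end loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The two-stage structure mirrors the quoted schedule: each deformation network first gets its own optimizer and objective, and only afterwards are both sets of parameters handed to a single optimizer with a smaller learning rate for end-to-end fine-tuning.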