Learning Deformable Tetrahedral Meshes for 3D Reconstruction

Authors: Jun Gao, Wenzheng Chen, Tommy Xiang, Alec Jacobson, Morgan McGuire, Sanja Fidler

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show that it can represent arbitrary, complex topology, is both memory and computationally efficient, and can produce high-fidelity reconstructions with a significantly smaller grid size than alternative volumetric approaches. The predicted surfaces are also inherently defined as tetrahedral meshes, thus do not require post-processing. We demonstrate that DEFTET matches or exceeds both the quality of the previous best approaches and the performance of the fastest ones. Our approach obtains high-quality tetrahedral meshes computed directly from noisy point clouds, and is the first to showcase high-quality 3D tet-mesh results using only a single image as input. Our project webpage: https://nv-tlabs.github.io/DefTet/. To demonstrate the effectiveness of DEFTET, we use the ShapeNet core dataset [4]."
Researcher Affiliation | Collaboration | NVIDIA, University of Toronto, Vector Institute
Pseudocode | No | The paper describes its methods in prose and mathematical formulations but does not include any structured pseudocode or algorithm blocks (a hedged sketch of the pipeline follows the table).
Open Source Code | Yes | "Our project webpage: https://nv-tlabs.github.io/DefTet/."
Open Datasets | Yes | "We use the ShapeNet core dataset [4]."
Dataset Splits | No | The paper mentions using a “validation set” for threshold optimization and a “test set” for evaluation, and uses standard datasets such as ShapeNet, but it does not explicitly state the training/validation/test split percentages or sample counts for its experiments in the main text (see the split sketch after the table).
Hardware Specification | Yes | "We also evaluate inference time of all the methods on the same Nvidia V100 GPU." and "...with respect to the training time on NVIDIA RTX 2080 GPU."
Software Dependencies | No | The paper mentions using Kaolin [18], the official codebase from [34], and Marching Tet [9], but it does not provide version numbers for any of these software dependencies (see the environment-logging sketch after the table).
Experiment Setup | No | The paper mentions hyperparameters (e.g., “Lambdas are hyperparameters that weight the influence of each of the terms”) and discusses different architectures and loss functions, but it defers the detailed experimental setup, including specific hyperparameter values, to the Appendix, stating, “Details about all architectures are provided in the Appendix.” The main text lacks concrete values for hyperparameters such as learning rate, batch size, or number of epochs (see the loss-weighting sketch after the table).
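
Because the paper has no algorithm block, the following is a minimal, hypothetical Python sketch of the pipeline it describes in prose: encode the input, predict per-vertex offsets that deform a fixed tetrahedral grid, predict per-tetrahedron occupancy, and read the surface off the occupancy boundary. All class, function, and tensor names here are illustrative assumptions, not identifiers from the official codebase.

    # Hypothetical sketch only; the paper's actual architectures are in its Appendix.
    import torch
    import torch.nn as nn

    class DefTetSketch(nn.Module):
        def __init__(self, feat_dim=64):
            super().__init__()
            self.offset_head = nn.Linear(feat_dim + 3, 3)     # per-vertex deformation
            self.occupancy_head = nn.Linear(feat_dim + 3, 1)  # per-tet occupancy logit

        def forward(self, feat, verts, tets):
            # feat:  (feat_dim,) global code from an image/point-cloud encoder (not shown)
            # verts: (V, 3) vertices of the initial tetrahedral grid
            # tets:  (T, 4) vertex indices of each tetrahedron
            f_v = feat.expand(verts.shape[0], -1)
            deformed = verts + self.offset_head(torch.cat([f_v, verts], dim=-1))
            centers = deformed[tets].mean(dim=1)              # (T, 3) tet centroids
            f_t = feat.expand(tets.shape[0], -1)
            occ = torch.sigmoid(self.occupancy_head(torch.cat([f_t, centers], dim=-1)))
            # Surface extraction would keep faces shared by one occupied and one
            # empty tetrahedron; omitted here.
            return deformed, occ.squeeze(-1)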
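
Since the split sizes are not stated, anyone reproducing the experiments must pick their own; a deterministic split helper is the usual remedy. The 70/10/20 ratio and the seed below are placeholders, not values from the paper.

    import random

    def split_ids(model_ids, ratios=(0.7, 0.1, 0.2), seed=0):
        # Deterministic shuffle so the split is reproducible across runs.
        ids = sorted(model_ids)
        random.Random(seed).shuffle(ids)
        n_train = int(ratios[0] * len(ids))
        n_val = int(ratios[1] * len(ids))
        return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]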
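
With no versions pinned, the next best thing a reproduction can do is record the environment it actually ran under. A small sketch; it assumes kaolin exposes __version__ the way most Python packages do.

    import sys
    import torch

    print("python:", sys.version.split()[0])
    print("torch:", torch.__version__)
    print("cuda:", torch.version.cuda)
    try:
        import kaolin
        print("kaolin:", kaolin.__version__)
    except ImportError:
        print("kaolin: not installed")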
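
The "lambdas" the paper refers to are per-term loss weights; the sketch below shows how such a weighted sum is typically assembled. The term names and weight values are placeholders, since the paper defers the actual values to its Appendix.

    import torch

    def total_loss(losses, lambdas):
        # losses:  dict of scalar tensors, one per training objective
        # lambdas: matching dict of weights (not given in the main text, so they
        #          must be tuned or taken from the paper's Appendix)
        return sum(lambdas[name] * losses[name] for name in losses)

    loss = total_loss(
        {"recon": torch.tensor(0.5), "surface": torch.tensor(0.2), "reg": torch.tensor(0.1)},
        {"recon": 1.0, "surface": 0.5, "reg": 0.1},
    )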