Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis

Authors: Tianchang Shen, Jun Gao, Kangxue Yin, Ming-Yu Liu, Sanja Fidler

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (4 experiments) | We first evaluate DMTet in the challenging application of generating high-quality animal shapes from coarse voxels. We further evaluate DMTet in reconstructing 3D shapes from noisy point clouds on ShapeNet by comparing to existing state-of-the-art methods.
Researcher Affiliation | Collaboration | Tianchang Shen¹,²,³, Jun Gao¹,²,³, Kangxue Yin¹, Ming-Yu Liu¹, Sanja Fidler¹,²,³ (¹NVIDIA, ²University of Toronto, ³Vector Institute)
Pseudocode | No | The paper describes its methods and processes but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks.
Open Source Code | No | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] The code is currently quite uncleaned and requires many dependencies. We are planning to release the code after cleaning.
Open Datasets | Yes | We collected 1562 animal models from the TurboSquid website... Among 1562 shapes, we randomly select 1120 shapes for training, and the remaining 442 shapes for testing. ... We use all 13 categories in ShapeNet [3] core data.
Dataset Splits | Yes | Among 1562 shapes, we randomly select 1120 shapes for training, and the remaining 442 shapes for testing.
Hardware Specification | Yes | We additionally report average inference time on the same Nvidia V100 GPU.
Software Dependencies | No | The paper mentions tools such as Kaolin, PVCNN, and GCN, but does not provide specific version numbers for these software dependencies.
Experiment Setup | Yes | To prepare the input to the network, we first voxelize the mesh into the resolution of 16³, and then sample 3000 points from the surface after applying marching cubes to the 16³ voxel grid. ... The final loss is a weighted sum of all five loss terms: L = λcd Lcd + λnormal Lnormal + λG LG + λSDF LSDF + λdef Ldef, where λcd, λnormal, λG, λSDF, λdef are hyperparameters (provided in the Supplement).
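The final training objective quoted above is a weighted sum of five loss terms. A minimal sketch of that composition is shown below; the per-term loss values and the λ weights here are placeholders, since the paper states that the actual hyperparameter values are provided in its supplement.

```python
# Sketch of DMTet's final objective: L = Σ_i λ_i · L_i over the five loss
# terms named in the paper (Chamfer distance, normal, GAN, SDF, deformation).
# All numeric values below are hypothetical placeholders, NOT the paper's
# hyperparameters (those are deferred to the supplement).

def total_loss(losses: dict, weights: dict) -> float:
    """Weighted sum of loss terms, matching each term to its lambda by key."""
    return sum(weights[name] * value for name, value in losses.items())

# Hypothetical per-term losses from one training step.
losses = {"cd": 0.12, "normal": 0.05, "G": 0.30, "SDF": 0.08, "def": 0.02}
# Hypothetical lambda weights.
weights = {"cd": 1.0, "normal": 0.1, "G": 0.5, "SDF": 1.0, "def": 0.01}

L = total_loss(losses, weights)
```

In a real training loop each entry of `losses` would be a differentiable tensor (e.g. a PyTorch scalar), so the same weighted sum would remain backpropagatable.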