Learning elementary structures for 3D shape generation and matching

Authors: Theo Deprelle, Thibault Groueix, Matthew Fisher, Vladimir Kim, Bryan Russell, Mathieu Aubry

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our method on two tasks: reconstructing ShapeNet objects and estimating dense correspondences between human scans (FAUST inter challenge). We show a 16% improvement over surface deformation approaches for shape reconstruction and outperform the FAUST inter and intra challenge state of the art by 2% and 7%, respectively.
Researcher Affiliation | Collaboration | Theo Deprelle1, Thibault Groueix1, Matthew Fisher2, Vladimir G. Kim2, Bryan C. Russell2, Mathieu Aubry1; 1LIGM (UMR 8049), École des Ponts, UPE; 2Adobe Research
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available on our project webpage: http://imagine.enpc.fr/~deprellt/atlasnet2
Open Datasets | Yes | We evaluate on the ShapeNet Core dataset [6]. We train our method using the SURREAL dataset [29], extended to include some additional bend-over poses as in 3D-CODED [10]. To evaluate correspondences on real data, we use the FAUST benchmark [5] consisting of 200 testing scans with 170k vertices from the inter challenge...
Dataset Splits | Yes | For single-category reconstruction, we evaluated over airplane (5424/1360 train/test shapes) and chair (3248/816) categories. For multi-category reconstruction, we used 13 categories: airplane, bench, cabinet, car, chair, monitor, lamp, speaker, firearm, couch, table, cellphone, watercraft (31760/7952). We use 229,984 SURREAL meshes of humans in various poses for training and 224 SURREAL meshes to test reconstruction quality. (These split counts are restated as a configuration sketch after this table.)
Hardware Specification | Yes | We train our model on an NVIDIA 1080Ti GPU, with a 16 core Intel I7-7820X CPU (3.6GHz), 126GB RAM and SSD storage.
Software Dependencies | No | The paper mentions software components like 'PointNet network', 'multi-layer perceptron', and 'Adam optimizer', but it does not specify version numbers for these or other software libraries (e.g., Python, TensorFlow, PyTorch versions). (A hedged sketch of these components follows the table.)
Experiment Setup | Yes | We use the Adam optimizer with a learning rate of 0.001, a batch size of 16, and batch normalization layers. We train our method using input point clouds of 2500 points when correspondences are not available and 6800 points when correspondences are available. (See the training-step sketch after this table.)
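
For convenience, the split counts quoted in the Dataset Splits row can be restated as a small Python configuration dictionary. This is purely an illustrative summary of the reported numbers; the dictionary name and layout are not taken from the authors' code.

```python
# Illustrative restatement of the reported train/test split sizes.
# SPLITS and its structure are assumptions for readability, not code
# from the paper's repository.
SPLITS = {
    "shapenet_single_category": {
        "airplane": {"train": 5424, "test": 1360},
        "chair": {"train": 3248, "test": 816},
    },
    # 13 categories: airplane, bench, cabinet, car, chair, monitor,
    # lamp, speaker, firearm, couch, table, cellphone, watercraft
    "shapenet_multi_category": {"train": 31760, "test": 7952},
    "surreal_humans": {"train": 229984, "test": 224},
}
```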
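
The Software Dependencies row names a PointNet encoder, multi-layer perceptrons, and the Adam optimizer but no versions. The sketch below shows what such an encoder/decoder pair could look like in PyTorch; the layer widths, class names, and the choice of PyTorch itself are assumptions on my part, not details confirmed by the paper.

```python
# Minimal sketch of the components the paper names (a PointNet-style
# encoder and an MLP decoder), assuming PyTorch. Sizes are illustrative.
import torch
import torch.nn as nn

class PointNetEncoder(nn.Module):
    """Encodes an input point cloud (B, N, 3) into a global latent code."""
    def __init__(self, latent_dim=1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, latent_dim, 1), nn.BatchNorm1d(latent_dim), nn.ReLU(),
        )

    def forward(self, points):
        x = self.mlp(points.transpose(1, 2))  # (B, latent_dim, N)
        return x.max(dim=2).values            # max-pool over the N points

class MLPDecoder(nn.Module):
    """Maps template points, conditioned on the latent code, to 3D points."""
    def __init__(self, latent_dim=1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 3), nn.Tanh(),
        )

    def forward(self, template_points, latent):
        # template_points: (B, M, 3); latent: (B, latent_dim)
        lat = latent.unsqueeze(1).expand(-1, template_points.size(1), -1)
        return self.mlp(torch.cat([template_points, lat], dim=-1))
```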
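
The Experiment Setup row fully specifies the optimizer and batch size, so a single training step can be sketched directly. The snippet reuses the encoder/decoder sketch above, substitutes a synthetic batch for a real ShapeNet loader, uses random template points in place of the paper's learned elementary structures, and stands in a naive O(N^2) squared Chamfer distance for the reconstruction loss; all of those stand-ins are assumptions.

```python
# One training step with the reported hyperparameters:
# Adam, learning rate 0.001, batch size 16, 2500-point input clouds.
import torch

def chamfer_distance(pred, target):
    """Naive symmetric squared Chamfer distance between (B, N, 3) clouds."""
    d = torch.cdist(pred, target).pow(2)  # (B, N, M) pairwise squared distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

encoder, decoder = PointNetEncoder(), MLPDecoder()  # from the sketch above
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=0.001)      # reported learning rate

points = torch.rand(16, 2500, 3)  # synthetic stand-in for a ShapeNet batch
latent = encoder(points)
# Placeholder template points; the paper instead samples its learned
# elementary structures here.
template = torch.rand(16, 2500, 3) * 2 - 1
reconstruction = decoder(template, latent)

loss = chamfer_distance(reconstruction, points)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```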