Deep Shells: Unsupervised Shape Correspondence with Optimal Transport

Authors: Marvin Eisenberger, Aysim Toker, Laura Leal-Taixé, Daniel Cremers

NeurIPS 2020

Each entry below gives a reproducibility variable, its result, and the supporting LLM response.

Research Type: Experimental
LLM Response: "We propose a novel unsupervised learning approach to 3D shape correspondence that builds a multiscale matching pipeline into a deep neural network. ... Finally, we show that the proposed unsupervised method significantly improves over the state-of-the-art on multiple datasets, even in comparison to the most recent supervised methods. Moreover, we demonstrate compelling generalization results by applying our learned filters to examples that significantly deviate from the training set. ... We evaluate our method on the standard benchmarks FAUST [4] and SCAPE [2]. ... We split both datasets into training sets of 80 and 51 shapes respectively and 20 test shapes each and randomly shuffle the 80² and 51² pairs during training. ... We report the matching accuracy on the test sets in Table 1 for our method and compare it to the current state-of-the-art of both axiomatic and learning approaches."

Researcher Affiliation: Academia
LLM Response: Marvin Eisenberger, Technical University of Munich, marvin.eisenberger@in.tum.de; Aysim Toker, Technical University of Munich, aysim.toker@in.tum.de; Laura Leal-Taixé, Technical University of Munich, leal.taixe@tum.de; Daniel Cremers, Technical University of Munich, cremers@tum.de

Pseudocode: No
LLM Response: The paper describes algorithms and processes in textual form and through diagrams (Figure 1 shows a network overview), but it does not include any formally labeled 'Pseudocode' or 'Algorithm' blocks.

Open Source Code: No
LLM Response: The paper does not include any explicit statement about releasing its source code or a direct link to a code repository for the described methodology.

Open Datasets: Yes
LLM Response: "We evaluate our method on the standard benchmarks FAUST [4] and SCAPE [2]. Instead of the normal datasets, we use the more challenging remeshed versions from [31]."

Dataset Splits: No
LLM Response: "We split both datasets into training sets of 80 and 51 shapes respectively and 20 test shapes each and randomly shuffle the 80² and 51² pairs during training." The paper specifies training and testing splits, but it does not explicitly mention a separate validation split or how one would be derived.

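A minimal Python sketch of one way to realize the quoted splits and pair shuffling; `make_split`, the seed, and the index selection are assumptions for illustration, not details from the paper:

```python
import random

def make_split(n_shapes, n_train, seed=0):
    # Hypothetical helper: deterministic train/test split of shape indices.
    # The paper reports the split sizes but not how shapes were selected.
    rng = random.Random(seed)
    idx = list(range(n_shapes))
    rng.shuffle(idx)
    return idx[:n_train], idx[n_train:]

faust_train, faust_test = make_split(100, 80)  # remeshed FAUST: 80 train / 20 test
scape_train, scape_test = make_split(71, 51)   # remeshed SCAPE: 51 train / 20 test

# All ordered training pairs, i.e. 80² = 6400 for FAUST (51² = 2601 for SCAPE),
# reshuffled during training as quoted above.
pairs = [(i, j) for i in faust_train for j in faust_train]
random.shuffle(pairs)
```
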
Hardware Specification: No
LLM Response: The paper mentions implementing the network in PyTorch and using an Adam optimizer, but it does not specify any hardware details such as GPU or CPU models, memory, or the computing environment.

Software Dependencies: No
LLM Response: "We implemented our network in PyTorch using Adam optimizer [18]." The paper mentions PyTorch but does not provide a specific version number, and no other software dependencies are listed with versions.

Experiment Setup: Yes
LLM Response: "Our pipeline takes 352-dimensional SHOT descriptors [40] as an input... The inputs to our method are normalized to a fixed square root area of 2. Furthermore, we compute 500 Laplacian eigenpairs on all inputs as a preprocessing step. ... Our spectral convolution layer uses 120 filters on the frequency domain represented with 16 cosine basis functions each... We use 200 eigenfunctions for the truncated spectral filters... we use 8 iterations from k = 6 to k = 20 on a logarithmic scale for training and a refined pipeline with up to k = 500 eigenfunctions for testing. Finally, we use a fixed number of 10 Sinkhorn projections and the entropy regularization coefficient λ = 0.12 in Eq. (9)."
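
The quoted optimal-transport settings (10 Sinkhorn projections, λ = 0.12) can be illustrated with the standard entropy-regularized Sinkhorn iteration. The sketch below is one reading of those numbers, not the authors' implementation; the `sinkhorn` helper, the cost matrix, and the uniform marginals are assumptions:

```python
import math
import torch

def sinkhorn(cost, lam=0.12, n_iter=10):
    """Entropy-regularized optimal transport via Sinkhorn projections
    (standard textbook form with uniform marginals; the paper's Eq. (9)
    may differ in its exact normalization). cost: (n, m) cost matrix."""
    n, m = cost.shape
    a = torch.full((n,), 1.0 / n)  # uniform source marginal
    b = torch.full((m,), 1.0 / m)  # uniform target marginal
    K = torch.exp(-cost / lam)     # Gibbs kernel
    u, v = torch.ones(n), torch.ones(m)
    for _ in range(n_iter):        # fixed number of projections, as quoted
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]  # soft correspondence (transport plan)

# Hypothetical usage: soft matching between feature embeddings of two shapes.
cost = torch.cdist(torch.randn(50, 16), torch.randn(60, 16))
P = sinkhorn(cost)

# The quoted multiscale schedule: 8 values of k from 6 to 20 on a log scale.
ks = torch.logspace(math.log10(6.0), math.log10(20.0), steps=8).round().long()
```

With uniform marginals, the returned plan's columns sum exactly to 1/m after the final projection, while the row sums approach 1/n as the iterations proceed; a fixed, small iteration count like the quoted 10 keeps the matching step differentiable and cheap inside the training loop.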