Learning shape correspondence with anisotropic convolutional neural networks

Authors: Davide Boscaini, Jonathan Masci, Emanuele Rodolà, Michael Bronstein

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We tested ACNN's performance in challenging settings, achieving state-of-the-art results on recent correspondence benchmarks. In this section, we evaluate the proposed ACNN method and compare it to state-of-the-art approaches. In all experiments, we used L = 16 orientations and the anisotropy parameter α = 100. For all experiments, training was done by minimizing the loss (10). Full mesh correspondence: we used the FAUST humans dataset [3]. Partial correspondence: we used the recent, very challenging SHREC'16 Partial Correspondence benchmark [7].
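As a quick illustration, the hyper-parameters shared across all experiments could be collected in a single configuration for a re-implementation. This is a minimal sketch; only the values L = 16 and α = 100 come from the paper, and all names are ours:

```python
# Hypothetical configuration for a re-implementation of ACNN.
# Only the two values are stated in the paper; the names are ours.
ACNN_CONFIG = {
    "num_orientations": 16,     # L = 16 kernel orientations
    "anisotropy_alpha": 100.0,  # anisotropy parameter α = 100
}
```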
Researcher Affiliation | Collaboration | Davide Boscaini (1), Jonathan Masci (1), Emanuele Rodolà (1), Michael Bronstein (1,2,3); (1) USI Lugano, Switzerland; (2) Tel Aviv University, Israel; (3) Intel, Israel
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions that "Neural networks were implemented in Theano [2]" but does not provide a link or explicit statement about the availability of the authors' own ACNN source code.
Open Datasets | Yes | We used the FAUST humans dataset [3], containing 100 meshes of 10 scanned subjects, each in 10 different poses. We used the recent, very challenging SHREC'16 Partial Correspondence benchmark [7], consisting of nearly isometrically deformed shapes from eight classes, with different parts removed.
Dataset Splits | Yes | First 80 shapes for training and the remaining 20 for testing, following verbatim the settings of [16]. The dataset was split into disjoint training and testing sets. For cuts, training was done on 15 shapes per class; for holes, training was done on 10 shapes per class.
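For concreteness, the FAUST split described above (first 80 shapes for training, remaining 20 for testing) can be written out in a few lines of Python. This sketches only the split logic; the 0-based index convention is our assumption:

```python
# FAUST: 100 meshes = 10 subjects x 10 poses, indexed here as 0..99.
NUM_SHAPES = 100

train_ids = list(range(80))      # first 80 shapes for training
test_ids = list(range(80, 100))  # remaining 20 shapes for testing

# The paper follows [16] verbatim, so the two sets are disjoint.
assert set(train_ids).isdisjoint(test_ids)
assert len(train_ids) + len(test_ids) == NUM_SHAPES
```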
Hardware Specification | Yes | For shapes with 6.9K vertices, Laplacian computation and eigendecomposition took 1 second and 4 seconds per angle, respectively, on a desktop workstation with 64GB of RAM and an i7-4820K CPU.
Software Dependencies | No | The paper states that "Neural networks were implemented in Theano [2]" but does not provide specific version numbers for Theano or any other software dependencies.
Experiment Setup | Yes | In all experiments, we used L = 16 orientations and the anisotropy parameter α = 100. Neural networks were implemented in Theano [2]. The ADAM [11] stochastic optimization algorithm was used with an initial learning rate of 10^-3, β1 = 0.9, and β2 = 0.999. For this experiment, we adopted the following architecture inspired by GCNN [16]: FC64+IC64+IC128+IC256+FC1024+FC512+Softmax. Dropout regularization with probability 0.5 was crucial to avoid overfitting on such a small training set. We used the following ACNN architecture: IC32+FC1024+DO(0.5)+FC2048+DO(0.5)+Softmax.
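Putting the stated optimizer settings and the partial-correspondence architecture together, a hedged PyTorch sketch follows (the authors' own code was in Theano; the intrinsic IC32 anisotropic convolution layer has no off-the-shelf equivalent and is omitted here, and the ReLU nonlinearities and 6890-vertex output size are our assumptions, not the paper's):

```python
import torch
import torch.nn as nn

# Sketch of the tail of the ACNN partial-correspondence architecture,
# IC32 + FC1024 + DO(0.5) + FC2048 + DO(0.5) + Softmax, starting after
# the (omitted) IC32 anisotropic convolution layer.
NUM_VERTICES = 6890  # assumed output size (one logit per model vertex)

head = nn.Sequential(
    nn.Linear(32, 1024), nn.ReLU(),    # FC1024 (ReLU assumed)
    nn.Dropout(p=0.5),                 # DO(0.5)
    nn.Linear(1024, 2048), nn.ReLU(),  # FC2048 (ReLU assumed)
    nn.Dropout(p=0.5),                 # DO(0.5)
    nn.Linear(2048, NUM_VERTICES),     # per-vertex correspondence scores
    nn.LogSoftmax(dim=-1),             # Softmax output layer
)

# ADAM with the hyper-parameters stated in the paper.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3,
                             betas=(0.9, 0.999))
```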