Weakly Supervised Deep Functional Maps for Shape Matching

Authors: Abhishek Sharma, Maks Ovsjanikov

NeurIPS 2020

Reproducibility Variable Result LLM Response
Research Type Experimental We show empirically the minimum components for obtaining state-of-the-art results with different loss functions, supervised as well as unsupervised. Furthermore, we propose a novel framework designed for both full-to-full as well as partial-to-full shape matching that achieves state-of-the-art results on several benchmark datasets, outperforming even the fully supervised methods.
Researcher Affiliation Academia Abhishek Sharma LIX, École Polytechnique kein.iitian@gmail.com Maks Ovsjanikov LIX, École Polytechnique maks@lix.polytechnique.fr
Pseudocode No The paper describes its methods in prose and mathematical formulations but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code Yes Our code is publicly available at https://github.com/Not-IITian/Weakly-supervised-Functional-map
Open Datasets Yes For a fair comparison with Donati et al. [2020], we follow the same experimental setup and test our method on a wide spectrum of datasets: first, the re-meshed versions of the FAUST dataset Bogo et al. [2014] and SCAPE Anguelov et al. [2005], made publicly available by Ren et al. [2018]. Lastly, we also use the training dataset of 3D-CODED, consisting of 230K synthetic shapes generated using SURREAL Varol et al. [2017] with the parametric model SMPL introduced in Loper et al. [2015]. ... Finally, we quantitatively evaluate our method in the partial matching scenario on the challenging SHREC 16 Partial Correspondence benchmark Cosmo et al. [2016].
Dataset Splits Yes Following the standard protocol, we split FAUST re-meshed and SCAPE re-meshed into training and test sets containing 80 and 20 shapes for FAUST, and 51 and 20 shapes for SCAPE. ... We use some of these shapes as a validation set and separate them from training or test set.
Hardware Specification No The paper does not provide specific details about the hardware used for the experiments, such as GPU models, CPU models, or memory specifications. It only mentions the software framework TensorFlow.
Software Dependencies No We implemented our method in TensorFlow Abadi et al. [2015]. The paper mentions TensorFlow but does not provide a specific version number for it or for any other software dependencies.
Experiment Setup Yes We implemented our method in TensorFlow Abadi et al. [2015]. We train our network with a batch size of 24 shape pairs for 10000 steps. We use a learning rate of 1e-4 with the Adam optimizer. During training, we randomly sample 4000 points from each shape when training on the SURREAL dataset, whose shapes contain 7000 points each. For other datasets such as SCAPE and FAUST re-meshed, which contain roughly 5000 points each, we randomly sample 3000 points during training.
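The setup above can be sketched in code. This is a minimal, hedged illustration of the reported hyperparameters and per-dataset point sampling, not the authors' implementation: the config keys, dataset names, and the `sample_points` helper are all hypothetical, and only the numeric values come from the paper.

```python
import numpy as np

# Hyperparameters reported in the paper (dict keys are illustrative, not from the code release).
CONFIG = {
    "batch_size": 24,       # shape pairs per training batch
    "train_steps": 10_000,
    "learning_rate": 1e-4,  # used with the Adam optimizer
}

# Points randomly sampled per shape, as reported per dataset.
SAMPLES_PER_SHAPE = {
    "surreal": 4000,  # SURREAL shapes contain ~7000 points each
    "faust": 3000,    # FAUST re-meshed shapes contain ~5000 points
    "scape": 3000,    # SCAPE re-meshed shapes contain ~5000 points
}

def sample_points(vertices: np.ndarray, dataset: str,
                  rng: np.random.Generator) -> np.ndarray:
    """Randomly subsample the dataset-specific number of vertices from one shape."""
    n = SAMPLES_PER_SHAPE[dataset]
    idx = rng.choice(len(vertices), size=n, replace=False)  # sample without replacement
    return vertices[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shape = rng.standard_normal((7000, 3))     # stand-in for one SURREAL shape
    sampled = sample_points(shape, "surreal", rng)
    print(sampled.shape)                       # (4000, 3)
```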