Non-Rigid Shape Registration via Deep Functional Maps Prior

Authors: Puhua Jiang, Mingze Sun, Ruqi Huang

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results show that, with as few as dozens of training shapes of limited variability, our pipeline not only achieves state-of-the-art results on several benchmarks of non-rigid point cloud matching, but also delivers high-quality correspondences between unseen challenging shape pairs that undergo both significant extrinsic and intrinsic deformations, in which case neither traditional registration methods nor intrinsic methods work.
Researcher Affiliation | Academia | Puhua Jiang (1,2)#, Mingze Sun (1)#, Ruqi Huang (1). 1: Tsinghua Shenzhen International Graduate School, China; 2: Peng Cheng Lab, China.
Pseudocode | Yes | Algorithm 1: Shape registration pipeline. Input: source mesh S = {V, E}, target point cloud T, and trained point feature extractor F. Output: X converging to a local minimum of E_total; deformed source model {V, E}; correspondences Π_ST and Π_TS between S and T. (A hedged sketch of this pipeline is given after the table.)
Open Source Code | Yes | The code is available at https://github.com/rqhuang88/DFR.
Open Datasets | Yes | Datasets: We evaluate our method and several state-of-the-art techniques for estimating correspondences between deformable shapes on an array of benchmarks as follows. FAUST_r: the remeshed version of the FAUST dataset [4], which consists of 100 human shapes (10 individuals performing the same 10 actions); we split the first 80 as training shapes and the rest as testing shapes. SCAPE_r: the remeshed version of the SCAPE dataset [2], which consists of 71 human shapes (same individual in varying poses); we split the first 51 as training shapes and the rest as testing shapes. SHREC19_r: the remeshed version of the SHREC19 dataset [36]... (A minimal split sketch is given after the table.)
Dataset Splits | No | The paper describes training and testing splits, for example, 'We split the first 80 as training shapes and the rest as testing shapes' for FAUST_r. However, it does not explicitly specify a separate validation split or its size for hyperparameter tuning.
Hardware Specification | No | The paper does not explicitly state the specific hardware used for its experiments, such as GPU or CPU models. It mentions a V100 GPU in the context of a baseline's memory limitations, but not for its own experimental setup.
Software Dependencies | No | The paper states 'We implement our framework in PyTorch' and 'We use a modified DGCNN [24] as the backbone of our feature extractor.' However, specific version numbers for these software dependencies are not provided.
Experiment Setup | Yes | For training the DFM network, in Eqn.(6) of the main text, we empirically set λ_bij = 1.0, λ_orth = 1.0, λ_align = 1e-4, λ_NCE = 1.0. We train our feature extractor with the Adam optimizer with a learning rate equal to 2e-3. The batch size is chosen to be 4 for all datasets. Regarding the registration optimization, in Eqn.(12) of the main text, we empirically set λ_cd = 0.01, λ_corr = 1.0, λ_arap = 20 in Stage-I and λ_cd = 1.0, λ_corr = 0.01, λ_arap = 1 in Stage-II. For α in Eqn.(9) of the main text, we set α = 0.2. (A configuration sketch is given after the table.)
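
Referring to the Pseudocode row above: the following is a minimal PyTorch sketch of the two-stage registration optimization outlined in Algorithm 1. Only the stage weights (λ_cd, λ_corr, λ_arap) and the overall structure (fixed correspondences from the trained feature extractor F, then gradient-based minimization of E_total) come from the paper; the nearest-neighbour matching, the Chamfer term, the edge-length surrogate for the ARAP regularizer, and all function names are illustrative assumptions, not the authors' implementation.

```python
import torch

def feature_correspondence(feat_s, feat_t):
    # Nearest-neighbour matching in feature space (assumption: the paper's
    # DFM prior may refine this, e.g. via functional maps).
    d = torch.cdist(feat_s, feat_t)           # (n_s, n_t) pairwise distances
    return d.argmin(dim=1), d.argmin(dim=0)   # Π_ST, Π_TS as index maps

def chamfer(x, t):
    # Symmetric Chamfer distance between point sets x and t.
    d = torch.cdist(x, t)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def edge_length_term(x, v, edges):
    # Simplified stand-in for the ARAP regularizer: preserve edge lengths.
    e0, e1 = edges[:, 0], edges[:, 1]
    return ((x[e0] - x[e1]).norm(dim=1) - (v[e0] - v[e1]).norm(dim=1)).pow(2).mean()

def register(V, edges, T, F, iters=500, lr=1e-3,
             stages=({"cd": 0.01, "corr": 1.0, "arap": 20.0},    # Stage-I weights
                     {"cd": 1.0, "corr": 0.01, "arap": 1.0})):   # Stage-II weights
    # Fix correspondences using the trained feature extractor F.
    with torch.no_grad():
        pi_st, pi_ts = feature_correspondence(F(V), F(T))
    X = V.clone().requires_grad_(True)         # deformed source vertices
    opt = torch.optim.Adam([X], lr=lr)
    for w in stages:
        for _ in range(iters):
            opt.zero_grad()
            E_total = (w["cd"] * chamfer(X, T)
                       + w["corr"] * (X - T[pi_st]).pow(2).sum(dim=1).mean()
                       + w["arap"] * edge_length_term(X, V, edges))
            E_total.backward()
            opt.step()
    return X.detach(), pi_st, pi_ts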
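
The train/test splits quoted in the Open Datasets row, written out as a small sketch; the paper describes no separate validation split, so none is constructed here, and the helper name split_indices is hypothetical.

```python
def split_indices(n_shapes, n_train):
    """First n_train shapes for training, the remainder for testing."""
    return list(range(n_train)), list(range(n_train, n_shapes))

faust_train, faust_test = split_indices(100, 80)  # FAUST_r: 80 train / 20 test
scape_train, scape_test = split_indices(71, 51)   # SCAPE_r: 51 train / 20 test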
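
The reported training hyperparameters from the Experiment Setup row gathered into one place, assuming a plain weighted sum for Eqn.(6); total_dfm_loss and make_optimizer are hypothetical helpers, not the authors' code.

```python
import torch

DFM_LOSS_WEIGHTS = {"bij": 1.0, "orth": 1.0, "align": 1e-4, "nce": 1.0}  # Eqn.(6)
ALPHA = 0.2            # α in Eqn.(9)
BATCH_SIZE = 4         # same batch size for all datasets
LEARNING_RATE = 2e-3   # Adam learning rate for the feature extractor

def total_dfm_loss(terms):
    """Weighted sum of the individual DFM loss terms (bij, orth, align, nce)."""
    return sum(DFM_LOSS_WEIGHTS[name] * value for name, value in terms.items())

def make_optimizer(feature_extractor):
    """Adam optimizer for the (modified DGCNN) feature extractor."""
    return torch.optim.Adam(feature_extractor.parameters(), lr=LEARNING_RATE)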