Wormhole Loss for Partial Shape Matching

Authors: Amit Bracha, Thomas Dagès, Ron Kimmel

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our method on the benchmarks SHREC'16 [20] and PFAUST [10]. The quantitative results shown in Tables 2 and 3 indicate that our method reaches state-of-the-art performance for unsupervised partial shape correspondence."
Researcher Affiliation | Academia | "Amit Bracha, Thomas Dagès, Ron Kimmel. Technion - Israel Institute of Technology, Haifa, Israel. {amit.bracha,thomas.dages}@cs.technion.ac.il"
Pseudocode | No | No explicit pseudocode or algorithm blocks were found.
Open Source Code | Yes | "Our code can be found at https://github.com/ABracha/Wormhole."
Open Datasets | Yes | "We evaluate our method on the benchmarks SHREC'16 [20] and PFAUST [10]."
Dataset Splits | No | The paper refers to training and test sets (e.g., "Test-set CUTS HOLES / Training-set CUTS HOLES" in Table 2) and mentions training the network for 20,000 iterations, but it does not explicitly provide percentages or sample counts for training, validation, and test splits, nor does it cite predefined splits for these benchmarks.
Hardware Specification | Yes | "We used a V100 and it took a few minutes to compute the masks for surfaces on the PFAUST datasets."
Software Dependencies | No | "We use as input features the xyz coordinates of each vertex along with its estimated normal. We took DiffusionNet [54] to be the feature extraction network... We follow [11] by replacing the FM layer with a direct computation of the correspondence matrix via Softmax similarity... We trained our network for 20000 iterations, with Adam optimizer [29], with a learning rate of 10^-3 and a cosine annealing scheduler [35]..." The paper names various software components (e.g., DiffusionNet, the Adam optimizer), but it does not provide version numbers for these or for key libraries such as PyTorch. (A sketch of the Softmax-similarity correspondence step appears below.)
Experiment Setup | Yes | "We trained our network for 20000 iterations, with Adam optimizer [29], with a learning rate of 10^-3 and a cosine annealing scheduler [35] with minimum learning rate parameter η_min = 10^-4 and maximum temperature of T_max = 300 steps. Lastly, for post-processing, we use the test time adaptation refinement method [18], that refines the network weights separately for each pair of surfaces with 15 iterations of gradient descent." (See the training-configuration sketch below.)
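
The Softmax-similarity correspondence step quoted under Software Dependencies can be made concrete with a minimal PyTorch sketch. This is an illustration only, not the authors' implementation: the function name `soft_correspondence`, the feature shapes, the cosine normalization, and the temperature `tau` are all assumptions; the paper follows [11] for the exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_correspondence(feat_x, feat_y, tau=0.07):
    """Illustrative Softmax-similarity correspondence (assumed form).

    feat_x: (n_x, d) per-vertex features of the partial shape
    feat_y: (n_y, d) per-vertex features of the full shape
    Returns a row-stochastic (n_x, n_y) soft correspondence matrix.
    """
    fx = F.normalize(feat_x, dim=-1)  # assumed: cosine-style similarity
    fy = F.normalize(feat_y, dim=-1)
    logits = fx @ fy.T / tau          # (n_x, n_y) similarity scores
    return torch.softmax(logits, dim=-1)

# Usage with random stand-in features:
# P = soft_correspondence(torch.randn(800, 128), torch.randn(1000, 128))
```

Replacing the functional-map (FM) layer with such a direct similarity-based matrix avoids estimating a spectral map for the partial shape, which is the motivation given in [11].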
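The training configuration quoted under Experiment Setup maps directly onto standard PyTorch components. The sketch below is a hedged reconstruction: the toy model stands in for DiffusionNet [54], the dummy loss replaces the paper's unsupervised losses, and the choice of `CosineAnnealingLR` (rather than a warm-restarts variant) is an assumption; only the optimizer, the learning rates, T_max, and the iteration count come from the quoted text.

```python
import torch

# Toy stand-in for the DiffusionNet feature extractor [54]; the real
# architecture is not reproduced here.
model = torch.nn.Sequential(
    torch.nn.Linear(6, 128),  # per-vertex input: xyz coordinates + estimated normal
    torch.nn.ReLU(),
    torch.nn.Linear(128, 128),
)

# Quoted setup: Adam, learning rate 1e-3, cosine annealing with
# eta_min = 1e-4 and T_max = 300 steps.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=300, eta_min=1e-4
)

for iteration in range(20000):           # "20000 iterations"
    feats = model(torch.randn(1000, 6))  # dummy batch of 1000 vertices
    loss = feats.pow(2).mean()           # placeholder for the paper's losses
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()

# Post-processing (not shown): test-time adaptation refinement [18],
# i.e., 15 further gradient-descent steps per surface pair.
```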