Improving Neural Network Surface Processing with Principal Curvatures
Authors: Josquin Harrison, James Benn, Maxime Sermesant
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our results show that using curvature as input leads to a significant increase in performance on segmentation and classification tasks, while incurring far less computational overhead than current methods. We then conduct extensive experiments in section 4, comparing principal curvature with three other representations in conjunction with three different NN architectures, on two segmentation datasets and one classification dataset, showing how principal curvature enhances any state-of-the-art model across different tasks. |
| Researcher Affiliation | Academia | Josquin Harrison Inria Sophia Antipolis josquin.harrison@inria.fr James Benn Inria Sophia Antipolis james.benn@inria.fr Maxime Sermesant Inria Sophia Antipolis maxime.sermesant@inria.fr |
| Pseudocode | No | The paper describes methods and calculations in prose and mathematical notation but does not include any pseudocode blocks or algorithm listings. |
| Open Source Code | Yes | We make all our code and experiments available at https://github.com/Inria-Asclepios/shape-nets |
| Open Datasets | Yes | Finally, we pick three tasks of varying complexity to measure the impact of each method: human segmentation [21], molecular segmentation [6], and shape classification [19]. Examples from each dataset are shown in figure 3. |
| Dataset Splits | Yes | As in the original paper, we use the SHREC07 dataset as test set. Similar to [47], we differ from [21] by evaluating on vertices rather than faces. ... We resample all meshes to 2048 points, except in the case of Diffusion Net where we kept the original discretisation. We evaluate all our baselines on 5 random splits with a train-test ratio of 80-20. ... We perform our experiments on 5 random splits. |
| Hardware Specification | Yes | Computed on an Apple M2 chip |
| Software Dependencies | No | For calculating the discrete principal curvatures via quadric surface fitting, we have used igl's implementation... The eigendecomposition of the Laplacian is then performed with scipy. For the SHOT representation, we use the implementation in the pcl library... |
| Experiment Setup | Yes | We optimise the negative log-likelihood for 100 epochs, with the ADAM optimiser and a scheduler step every 20 epochs. We run the models for 200 epochs... We train our baselines for 100 epochs with a scheduler step at epoch 50 and optimise the cross-entropy loss with a label smoothing factor of 0.2. |
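The Software Dependencies row notes that the paper computes discrete principal curvatures via quadric surface fitting using igl's implementation. As an illustrative sketch (not the authors' code), the sketch below estimates principal curvatures at a point by fitting a quadric z = ax² + bxy + cy² + dx + ey to neighboring points in a local tangent frame and taking the eigenvalues of the resulting second-fundamental-form matrix; all function names here are hypothetical.

```python
import numpy as np

def principal_curvatures_quadric(center, neighbors, normal):
    """Estimate principal curvatures (k1, k2) at `center` by least-squares
    fitting a quadric to `neighbors` in a tangent frame defined by `normal`.

    This is a minimal sketch of quadric-fitting curvature estimation; a
    production pipeline would use an optimized library implementation.
    """
    # Orthonormal local frame (e1, e2, n), with n the unit surface normal.
    n = normal / np.linalg.norm(normal)
    helper = np.array([0.0, 1.0, 0.0]) if abs(n[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
    e1 = np.cross(n, helper)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)

    # Express neighbors in local (x, y, z) coordinates around `center`.
    d = neighbors - center
    x, y, z = d @ e1, d @ e2, d @ n

    # Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y by linear least squares.
    A = np.column_stack([x**2, x * y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    a, b, c = coeffs[0], coeffs[1], coeffs[2]

    # At the origin of the local frame, the shape operator is approximated
    # by the Hessian of the fitted height function; its eigenvalues are the
    # principal curvatures.
    W = np.array([[2 * a, b], [b, 2 * c]])
    k1, k2 = np.linalg.eigvalsh(W)
    return k1, k2
```

For a sanity check, points sampled near the pole of a unit sphere (with the outward normal) should yield two curvatures of magnitude close to 1, since a sphere of radius R has k1 = k2 = ±1/R under this sign convention.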