Point2SSM: Learning Morphological Variations of Anatomies from Point Clouds

Authors: Jadie Adams, Shireen Elhabian

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct a benchmark of state-of-the-art point cloud deep networks on the SSM task, revealing their limited robustness to clinical challenges such as noisy, sparse, or incomplete input and limited training data.
Researcher Affiliation | Academia | Jadie Adams & Shireen Y. Elhabian, Scientific Computing and Imaging Institute, Kahlert School of Computing, University of Utah, USA. {jadie,shireen}@sci.utah.edu
Pseudocode | No | The paper describes the architecture and loss functions but does not include explicit pseudocode or an algorithm block.
Open Source Code | Yes | The source code is provided at https://github.com/jadie1/Point2SSM.
Open Datasets | Yes | We utilize three challenging organ mesh datasets of various sample sizes to benchmark the performance of Point2SSM and the comparison methods: spleen (Simpson et al., 2019) (40 shapes), pancreas (Simpson et al., 2019) (272 shapes), and left atrium of the heart (1096 shapes).
Dataset Splits | Yes | The datasets are randomly split into a training, validation, and test set using an 80%, 10%, 10% split.
Hardware Specification | Yes | A 4x TITAN V GPU was used to train all models.
Software Dependencies | No | The paper mentions 'Adam optimization' but does not specify version numbers for any software libraries or dependencies (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup | Yes | In all experiments, we set N = 1024, L = 128, M = 1024, and batch size B = 8, unless otherwise specified. Adam optimization with a constant learning rate of 0.0001 is used, and model training is run until convergence via validation assessment. Specifically, a model is considered converged if the validation CD has not improved in 100 epochs.
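The dataset split and convergence criterion reported above can be sketched in plain Python. This is a minimal illustration, not the authors' released code: the function and class names are hypothetical, and the seed handling is an assumption (the paper only states a random 80%/10%/10% split and a 100-epoch patience on validation Chamfer distance).

```python
import random


def split_dataset(ids, seed=0):
    """Randomly split sample ids into 80% train / 10% val / 10% test.

    Hypothetical helper mirroring the split described in the paper;
    the seed parameter is an assumption for reproducibility.
    """
    ids = list(ids)
    random.Random(seed).shuffle(ids)
    n_train = int(0.8 * len(ids))
    n_val = int(0.1 * len(ids))
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]  # remainder goes to the test set
    return train, val, test


class EarlyStopping:
    """Convergence check: stop once the validation Chamfer distance (CD)
    has not improved for `patience` consecutive epochs (100 in the paper)."""

    def __init__(self, patience=100):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_cd):
        """Record one epoch's validation CD; return True when converged."""
        if val_cd < self.best:
            self.best = val_cd
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

For example, splitting the 272-shape pancreas set this way yields 217 training, 27 validation, and 28 test shapes, and `EarlyStopping.step` returns `True` only after `patience` epochs without a new best validation CD.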