Topologically Faithful Image Segmentation via Induced Matching of Persistence Barcodes
Authors: Nico Stucki, Johannes C. Paetzold, Suprosanna Shit, Bjoern Menze, Ulrich Bauer
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate that it improves the topological performance of segmentation networks significantly across six diverse datasets while preserving the performance with respect to traditional scores. Our code is publicly available. ... Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. ... 4. Experiments with Betti matching |
| Researcher Affiliation | Academia | 1TUM School of Computation, Information and Technology, Technical University of Munich, Germany 2Munich Data Science Institute, Germany 3Munich Center for Machine Learning, Germany 4Department of Computing, Imperial College London, United Kingdom 5Department of Neuroradiology, Klinikum rechts der Isar, Germany 6Department of Quantitative Biomedicine, University of Zurich, Switzerland. |
| Pseudocode | Yes | Algorithm 1: Betti matching. Data: G, L; Option: relative = False, filtration = superlevel; Result: L0, L1, L |
| Open Source Code | Yes | Our code is publicly available. https://github.com/nstucki/Betti-matching/ |
| Open Datasets | Yes | We employ a set of six datasets with diverse topological features for our validation experimentation. Two datasets, the Massachusetts roads dataset and the CREMI neuron segmentation dataset... The C. elegans infection live/dead image dataset (Elegans) from the Broad Bioimage Benchmark Collection (Ljosa et al., 2012) and our synthetic, modified MNIST dataset (LeCun, 1998) (syn Mnist)... colon cancer cell dataset (Colon) from the Broad Bioimage Benchmark Collection (Carpenter et al., 2006; Ljosa et al., 2012) and the Massachusetts buildings dataset (Buildings) (Mnih, 2013) |
| Dataset Splits | Yes | We train all our models for a fixed, dataset-specific number of epochs and evaluate the final model on an unseen test set. ... For the buildings dataset (Mnih, 2013), we downsample the images to 375 × 375 pixels and randomly choose 80 samples for training and 20 for testing. For each epoch, we randomly sample 8 patches from each sample. ... For the Colon dataset... we randomly choose 20 samples for training and 4 for testing. ... For the CREMI dataset... we choose 100 samples for training and 25 for testing. ... For the Elegans dataset... we randomly choose 80 samples for training and 20 for testing. ... For the syn Mnist dataset... we train on 4500 full, randomly chosen images and use 1500 for testing. ... For the Roads dataset... we randomly choose 100 samples for training and 24 for testing. |
| Hardware Specification | Yes | We train all models on an Nvidia P8000 GPU using Adam optimizer. |
| Software Dependencies | No | The paper mentions software like 'Adam optimizer' and 'skimage python-library' but does not specify any version numbers for these or other software components. |
| Experiment Setup | Yes | We train all our models for a fixed, dataset-specific number of epochs and evaluate the final model on an unseen test set. We train all models on an Nvidia P8000 GPU using Adam optimizer. We run experiments on a range of alpha-parameters for clDice (Shit et al., 2021), the Wasserstein matching (Hu et al., 2019), and Betti matching; we choose to present the top performing model in Table 1; extended results are given in Tables 3, 4, 5, 6, 7 in App. J. |
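The table quotes Algorithm 1 (Betti matching on a prediction G and label L) and an alpha-weighted combination of a volumetric loss with a topological term. As a loose illustration only, the sketch below computes the much coarser Betti-0 (connected-component count) error between two binary masks and folds it into an alpha-weighted total, in the spirit of the setup described; it is *not* the paper's induced matching of persistence barcodes, and the names `betti_0` and `combined_score` are hypothetical, not taken from the authors' code.

```python
import numpy as np

def betti_0(mask):
    """Count connected components (Betti-0) of a binary 2D mask
    with 4-connectivity, using a simple union-find."""
    h, w = mask.shape
    parent = {}

    def find(x):
        # Path-halving find.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Row-major scan: neighbors above and to the left are already registered.
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                parent[(i, j)] = (i, j)
                if i > 0 and mask[i - 1, j]:
                    union((i, j), (i - 1, j))
                if j > 0 and mask[i, j - 1]:
                    union((i, j), (i, j - 1))
    return len({find(p) for p in parent})

def combined_score(volumetric_loss, pred_mask, gt_mask, alpha=0.5):
    """Alpha-weighted sum of a volumetric term and a (non-differentiable)
    Betti-0 error standing in for the topological term."""
    topo_error = abs(betti_0(pred_mask) - betti_0(gt_mask))
    return volumetric_loss + alpha * topo_error
```

Note that a Betti *number* error of zero does not imply spatially matched structures; the paper's contribution is precisely to replace such counts with a spatially faithful induced matching of barcodes, made differentiable for training.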