Learning Probabilistic Topological Representations Using Discrete Morse Theory

Authors: Xiaoling Hu, Dimitris Samaras, Chao Chen

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | From Section 4 (Experiments): "Our method directly makes predictions and performs inference on structures rather than on pixels. This can significantly benefit downstream tasks. While probabilities of structures can certainly be used for further analysis of the structural system, in this paper we focus on automatic image segmentation and semi-automatic annotation/proofreading tasks. On automatic image segmentation, we show that direct prediction ensures topological integrity even better than previous topology-aware losses. This is not surprising, as our prediction is made on structures. On the semi-automatic proofreading task, we show that our structure-level uncertainty helps human annotators obtain satisfactory segmentation annotations much more efficiently than previous methods."
Researcher Affiliation | Academia | Xiaoling Hu, Dimitris Samaras, and Chao Chen are all affiliated with Stony Brook University.
Pseudocode | Yes | Appendix A.4 details the persistent-homology-filtered topology watershed algorithm (Algorithm 1: Persistent-Homology Filtered Topology Watershed Algorithm); a sketch of the persistence-filtering idea follows the table.
Open Source Code | No | The paper does not contain any explicit statement about releasing open-source code or a link to a code repository.
Open Datasets | Yes | "Datasets. We use three datasets to validate the efficacy of the proposed method: ISBI13 (Arganda-Carreras et al., 2013) (volume), CREMI (volume), and DRIVE (Staal et al., 2004) (vessel). More details are included in Appendix A.6."
Dataset Splits | Yes | "We use a 3-fold cross-validation for all the methods to report the numbers over the validation set." (A minimal split sketch follows the table.)
Hardware Specification | Yes | "All the experiments are performed on an RTX A5000 GPU (24 GB memory) and an AMD EPYC 7542 32-core processor."
Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., specific Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | "Ablation study of loss weights. We observe that the performance of our method is quite robust to the loss weights α and β. As the learned distribution over the persistence threshold might affect the final performance, we conduct an ablation study on the weight of the KL divergence loss (β) on the DRIVE dataset. The results are reported in Fig. 7. With β = 10, the model achieves slightly better performance in terms of VOI (0.804 ± 0.047; the smaller, the better) than other choices. Note that, for all the experiments, we set α = 1." (A weighted-loss sketch follows the table.)
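
The persistence filtering in Algorithm 1 suppresses low-persistence structures before running the watershed. The sketch below is not the authors' algorithm: the h-minima transform in scikit-image implements a closely related 0-dimensional idea, removing minima whose depth (persistence) falls below a threshold before seeding the watershed. A minimal sketch, assuming a 2D probability map `prob` with values in [0, 1]:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_minima
from skimage.segmentation import watershed

def persistence_filtered_watershed(prob, eps=0.05):
    """Watershed on 1 - prob, keeping only basins whose minima persist beyond eps.

    A sketch of the persistence-filtering idea, not the paper's Algorithm 1:
    h_minima suppresses minima shallower than eps, which is the 0-dimensional
    analogue of thresholding persistence.
    """
    surface = 1.0 - prob                # basins sit at high-probability pixels
    seeds = h_minima(surface, eps)      # binary mask of minima deeper than eps
    markers, _ = ndi.label(seeds)       # one integer marker per surviving basin
    return watershed(surface, markers)  # flood from markers along the surface

# Hypothetical usage on a synthetic probability map:
prob = ndi.gaussian_filter(np.random.rand(128, 128), sigma=4)
labels = persistence_filtered_watershed(prob)
```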
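
To make the evaluation protocol concrete, here is a minimal 3-fold split with scikit-learn; `volumes` is a hypothetical list of sample indices, and the training/evaluation calls are placeholders rather than the paper's code:

```python
import numpy as np
from sklearn.model_selection import KFold

volumes = np.arange(30)  # hypothetical: indices of the available volumes/images
kf = KFold(n_splits=3, shuffle=True, random_state=0)

for fold, (train_idx, val_idx) in enumerate(kf.split(volumes)):
    # Training and evaluation here are stand-ins for the paper's training
    # loop and its validation metrics on each fold.
    print(f"fold {fold}: {train_idx.size} train volumes, {val_idx.size} val volumes")
```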
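
The ablation fixes α = 1 and varies β, the weight of the KL divergence term. Below is a minimal PyTorch sketch of such a weighted objective; the individual terms (`l_seg`, `l_struct`) and the Gaussian posterior/prior over the persistence threshold are assumptions for illustration, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

def total_loss(pred, target, q_dist, p_dist, alpha=1.0, beta=10.0):
    """L = L_seg + alpha * L_struct + beta * KL(q || p) (assumed form)."""
    l_seg = F.binary_cross_entropy(pred, target)  # pixel-level term
    l_struct = F.mse_loss(pred, target)           # placeholder structure-level term
    l_kl = kl_divergence(q_dist, p_dist).mean()   # regularizes the threshold distribution
    return l_seg + alpha * l_struct + beta * l_kl

# Hypothetical usage:
pred = torch.rand(4, 1, 64, 64)                    # predicted probabilities
target = (torch.rand(4, 1, 64, 64) > 0.5).float()  # binary ground truth
q = Normal(torch.tensor(0.3), torch.tensor(0.1))   # posterior over the threshold
p = Normal(torch.tensor(0.0), torch.tensor(1.0))   # prior over the threshold
loss = total_loss(pred, target, q, p, alpha=1.0, beta=10.0)
```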