Topology-Aware Segmentation Using Discrete Morse Theory

Authors: Xiaoling Hu, Yusu Wang, Li Fuxin, Dimitris Samaras, Chao Chen

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | On diverse datasets, our method achieves superior performance on both the DICE score and topological metrics. ... Our method outperforms state-of-the-art methods in multiple topology-relevant metrics (e.g., ARI and VOI) on various 2D and 3D benchmarks.
Researcher Affiliation | Academia | Xiaoling Hu (Stony Brook University), Yusu Wang (University of California, San Diego), Li Fuxin (Oregon State University), Dimitris Samaras (Stony Brook University), Chao Chen (Stony Brook University)
Pseudocode | No | The paper does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block. The methodology is described in narrative text with mathematical formulations.
Open Source Code | No | The paper does not provide a direct link to open-source code for the methodology, nor does it explicitly state that the code is being released or made available.
Open Datasets | Yes | Six natural and biomedical 2D datasets are used: ISBI12 (Arganda-Carreras et al., 2015), ISBI13 (Arganda-Carreras et al., 2013), CREMI, CrackTree (Zou et al., 2012), Road (Mnih, 2013) and DRIVE (Staal et al., 2004). ... We use three different biomedical 3D datasets: ISBI13, CREMI and 3Dircadb (Soler et al., 2010).
Dataset Splits | Yes | For all the experiments, we use a 3-fold cross-validation to tune hyperparameters for both the proposed method and other baselines, and report the mean performance over the validation set. (A sketch of this protocol appears after the table.)
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types) used to run the experiments. It only mentions using U-Nets.
Software Dependencies | No | The paper mentions using 2D U-Net and 3D U-Net architectures, but it does not specify any software dependencies with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x, Python 3.x).
Experiment Setup | Yes | Our loss has two terms, the cross-entropy term L_bce and the DMT-loss L_dmt: L(f, g) = L_bce(f, g) + β·L_dmt(f, g), in which f is the likelihood, g is the ground truth, and β is the weight of L_dmt. ... In practice, we first pretrain the network with only the cross-entropy loss, and then train the network with the combined loss. ... For all the experiments, we use a 3-fold cross-validation to tune hyperparameters for both the proposed method and other baselines. ... When β = 3, the proposed DMT-loss achieves the best performance, 0.982 (Betti Error). (A sketch of this loss appears after the table.)
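
For readers checking the evaluation protocol, here is a minimal sketch of the 3-fold cross-validation quoted in the Dataset Splits row, assuming index-based splits via scikit-learn's KFold. The function name `cross_validate` and the callable `train_and_eval` are hypothetical stand-ins, since the paper does not release code.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(images, labels, train_and_eval, n_splits=3, seed=0):
    """Evaluate one hyperparameter setting with k-fold CV.

    `train_and_eval` is a hypothetical callable: it trains a model on the
    training fold and returns a validation metric (e.g., Betti Error).
    """
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, val_idx in kf.split(images):
        scores.append(train_and_eval(images[train_idx], labels[train_idx],
                                      images[val_idx], labels[val_idx]))
    # The paper reports the mean performance over the validation set.
    return float(np.mean(scores))
```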
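
Similarly, a minimal PyTorch sketch of the loss combination quoted in the Experiment Setup row, L(f, g) = L_bce(f, g) + β·L_dmt(f, g), including the pretrain-then-combine schedule. `DMTCombinedLoss` is a hypothetical name, and `dmt_loss_fn` is a placeholder for the paper's DMT-loss, which is not released; this is a sketch of the reported setup, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DMTCombinedLoss(nn.Module):
    """Sketch of L(f, g) = L_bce(f, g) + beta * L_dmt(f, g).

    `dmt_loss_fn` is a placeholder for the paper's DMT-loss; any callable
    taking (likelihood, ground_truth) and returning a scalar can stand in.
    """

    def __init__(self, dmt_loss_fn, beta=3.0):
        super().__init__()
        self.bce = nn.BCELoss()      # f is a likelihood map in [0, 1]
        self.dmt_loss_fn = dmt_loss_fn
        self.beta = beta             # beta = 3 gave the best Betti Error per the review

    def forward(self, likelihood, ground_truth, pretraining=False):
        loss = self.bce(likelihood, ground_truth)
        # Per the quoted setup: pretrain with cross-entropy only,
        # then train with the combined loss.
        if not pretraining:
            loss = loss + self.beta * self.dmt_loss_fn(likelihood, ground_truth)
        return loss
```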