Topology-Preserving Deep Image Segmentation

Authors: Xiaoling Hu, Fuxin Li, Dimitris Samaras, Chao Chen

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our method is empirically validated by comparing with state-of-the-arts on natural and biomedical datasets with fine-scale structures. It achieves superior performance on metrics that encourage structural accuracy. In particular, our method significantly outperforms others on the Betti number error which exactly measures the topological accuracy." (An illustrative Betti-error computation follows this table.)
Researcher Affiliation | Academia | Xiaoling Hu (Stony Brook University), Li Fuxin (Oregon State University), Dimitris Samaras (Stony Brook University) and Chao Chen (Stony Brook University)
Pseudocode | No | The paper does not include structured pseudocode or algorithm blocks; it describes the computational steps and gradients in paragraph text.
Open Source Code | No | The paper contains no explicit statement of, or link to, open-source code for the described method.
Open Datasets | Yes | "We evaluate our method on six natural and biomedical datasets: CREMI, ISBI12 [4], ISBI13 [3], CrackTree [48], Road [28] and DRIVE [39]." The citations imply that these datasets are publicly available.
Dataset Splits | Yes | "For all datasets, we use a three-fold cross-validation and report the mean performance over the validation set." (A cross-validation sketch follows this table.)
Hardware Specification | No | The paper does not describe the specific hardware (e.g., GPU/CPU models, memory) used to run the experiments; it details only the network architecture and training patch sizes.
Software Dependencies | No | The paper mentions a deep neural network and a cross-entropy loss but does not name any software libraries or versions (e.g., PyTorch 1.9, TensorFlow 2.x).
Experiment Setup | Yes | "Our network contains six trainable weight layers: four convolutional layers and two fully connected layers. The first, second and fourth convolutional layers are each followed by a 2 × 2 max pooling layer with stride 2. ... Because of the computational complexity, we use a patch size of 65 × 65 throughout training. ... For convenience, we drop the weight of the cross-entropy loss and weight the topological loss with λ. ... In general, λ is at the magnitude of 1/10000." (A PyTorch sketch of this setup follows the table.)
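
The Betti number error quoted in the Research Type row counts topological features rather than pixels. Below is a minimal Python sketch of such a computation for 2D binary masks, counting connected components (Betti-0) and enclosed holes (Betti-1); the function names, connectivity conventions, and border handling are illustrative assumptions, not the authors' evaluation protocol.

    import numpy as np
    from scipy import ndimage

    def betti_numbers_2d(mask):
        """Betti-0 (connected components) and Betti-1 (holes) of a 2D binary mask."""
        mask = np.asarray(mask, dtype=bool)
        # Foreground components, 8-connectivity.
        _, b0 = ndimage.label(mask, structure=np.ones((3, 3), dtype=int))
        # Background components, complementary 4-connectivity (scipy's default).
        bg, n_bg = ndimage.label(~mask)
        # Background components touching the image border are not holes.
        border_labels = np.unique(np.concatenate(
            [bg[0, :], bg[-1, :], bg[:, 0], bg[:, -1]]))
        b1 = n_bg - np.count_nonzero(border_labels)
        return int(b0), int(b1)

    def betti_error(pred, gt):
        # Sum of absolute Betti-number differences between prediction and truth.
        b0p, b1p = betti_numbers_2d(pred)
        b0g, b1g = betti_numbers_2d(gt)
        return abs(b0p - b0g) + abs(b1p - b1g)

Under a metric of this kind, a prediction that breaks one road into two pieces or closes off a loop is penalized even when its per-pixel error is negligible, which is the structural accuracy the quoted abstract emphasizes.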
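The three-fold cross-validation in the Dataset Splits row is a standard protocol. A minimal sketch, assuming the data are NumPy arrays and that score is a hypothetical train-and-evaluate callable, could read:

    import numpy as np
    from sklearn.model_selection import KFold

    def three_fold_mean(images, labels, score):
        # Train on two folds, evaluate on the held-out fold, average the results.
        kf = KFold(n_splits=3, shuffle=True, random_state=0)
        scores = [score((images[tr], labels[tr]), (images[va], labels[va]))
                  for tr, va in kf.split(images)]
        return float(np.mean(scores))

The paper does not state whether folds are shuffled or how they are seeded; both choices above are assumptions.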
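The Experiment Setup row pins down the architecture (four convolutional and two fully connected layers, pooling after conv1, conv2 and conv4, 65 × 65 patches) and the loss weighting. A minimal PyTorch sketch consistent with that description follows; the channel widths, kernel sizes, grayscale input, and two-class patch-level head are assumptions not given in the quote, and topo_loss stands in for the paper's persistence-based topological term, which is not reproduced here.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PatchNet(nn.Module):
        """Six trainable weight layers: four conv + two fully connected,
        with 2x2, stride-2 max pooling after conv1, conv2 and conv4."""
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),   # 65 -> 32
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),  # 32 -> 16
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
                nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2) # 16 -> 8
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(128 * 8 * 8, 256), nn.ReLU(),
                nn.Linear(256, n_classes),
            )

        def forward(self, x):  # x: (B, 1, 65, 65) patches
            return self.classifier(self.features(x))

    def total_loss(logits, target, topo_loss, lam=1e-4):
        # Cross entropy plus a lambda-weighted topological term; the quoted
        # setup puts lambda "at the magnitude of 1/10000".
        return F.cross_entropy(logits, target) + lam * topo_loss

For a batch x of shape (B, 1, 65, 65) and integer labels y, the combined objective is total_loss(PatchNet()(x), y, topo_loss), with topo_loss supplied by the topological-loss computation described in the paper.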