End to End Trainable Active Contours via Differentiable Rendering

Authors: Shir Gur, Tal Shaharabany, Lior Wolf

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate that our method outperforms the state of the art segmentation networks and deep active contour solutions in a variety of benchmarks, including medical imaging and aerial images."
Researcher Affiliation | Collaboration | Shir Gur & Tal Shaharabany: School of Computer Science, Tel Aviv University. Lior Wolf: Facebook AI Research and Tel Aviv University.
Pseudocode | Yes | "Algorithm 1: Active contour training of networks E, D. Shown for a batch size of one."
Open Source Code | Yes | "Our code is available at https://github.com/shirgur/ACDRNet."
Open Datasets | Yes | "We consider two publicly available datasets in order to evaluate our method: the Vaihingen dataset (Rottensteiner et al.), which contains buildings from a German city, and the Bing Huts dataset (Marcos et al., 2018), which contains huts from a village in Tanzania. We evaluate our method using two common mammographic mass segmentation datasets, INBreast (Moreira et al., 2012) and DDSM-BCRP (Heath et al., 1998), and a cardiac MR left ventricle segmentation dataset, SCD (Radau et al., 2009). Following Ling et al. (2019), we employ the Cityscapes dataset (Cordts et al., 2016) to evaluate our model on the task of segmenting street images."
Dataset Splits | Yes | "The Vaihingen dataset... is divided into 100 buildings for training, and the remaining 68 for testing. The Bing Huts dataset consists of 606 images, 335 images for train and 271 images for test. For the mammographic dataset, we follow previous work and use the expert ROIs, which were manually extracted, and the same train/test split as Zhu et al. (2018); Li et al. (2018). The Cityscapes dataset... and the experiments employ the train/val/test split of Castrejon et al. (2017)."
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models (e.g., NVIDIA A100, RTX 2080 Ti), CPU models (e.g., Intel Core i7, Xeon), or detailed cloud/cluster specifications used for running its experiments.
Software Dependencies | No | The paper mentions software components like U-Net and the ADAM optimizer, but it does not provide specific version numbers for any software, libraries, or frameworks used (e.g., "Python 3.8, PyTorch 1.9").
Experiment Setup | Yes | "For training the segmentation networks, we use the ADAM optimizer (Kingma & Ba, 2014) with a learning rate of 0.001; the batch size varies depending on input image size: 100 for 64×64 inputs and 50 for 128×128 inputs. We set λ1 = 10^-2 and λ2 = 5×10^-1. For the initial contour... we simply use a fixed circle centered at the middle of the input image, with a diameter of 16 pixels, across all datasets. For both datasets, we augment the training data (of the networks) by re-scaling by factors of [0.75, 1, 1.25, 1.5], and rotating by [0, 15, 45, 60, 90, 135, 180, 210, 240, 270] degrees."
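The reported setup (loss weights, per-resolution batch sizes, augmentation grids, and the fixed circular initial contour) can be sketched as follows. This is a hedged illustration, not the authors' implementation; in particular, the number of contour vertices `num_points` is an assumed parameter that the excerpt does not specify.

```python
import math

# Hyperparameters quoted in the reproducibility table: ADAM with
# lr 0.001, loss weights lambda1 = 1e-2 and lambda2 = 5e-1.
LEARNING_RATE = 1e-3
LAMBDA1 = 1e-2
LAMBDA2 = 5e-1
BATCH_SIZE = {64: 100, 128: 50}        # batch size keyed by input resolution
SCALE_FACTORS = [0.75, 1, 1.25, 1.5]   # re-scaling augmentation grid
ROTATION_DEGREES = [0, 15, 45, 60, 90, 135, 180, 210, 240, 270]

def initial_contour(num_points, image_size, diameter=16.0):
    """Fixed circular initial contour centered in the image.

    Returns (x, y) vertices evenly spaced on a circle with the reported
    16-pixel diameter. `num_points` is a hypothetical knob, not given
    in the excerpt.
    """
    cx = cy = image_size / 2.0
    radius = diameter / 2.0
    return [
        (cx + radius * math.cos(2.0 * math.pi * k / num_points),
         cy + radius * math.sin(2.0 * math.pi * k / num_points))
        for k in range(num_points)
    ]
```

For example, `initial_contour(32, 64)` yields 32 vertices on a radius-8 circle centered at (32, 32), matching the "fixed circle centered at the middle of the input image" description for a 64×64 input.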