Recurrent neural circuits for contour detection

Authors: Drew Linsley*, Junkyung Kim*, Alekh Ashok, Thomas Serre

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluated γ-Net performance on two contour detection tasks: object contour detection in natural images (BSDS500 dataset; Arbeláez et al., 2011) and cell membrane detection in serial electron microscopy (SEM) images of mouse cortex (Kasthuri et al., 2015) and mouse retina (Ding et al., 2016). We validated the γ-Net against the BDCN after training on a full and augmented BSDS training set (Xie & Tu, 2017). The γ-Net performed similarly in F1 ODS (0.802) to the BDCN (0.806) and humans (0.803), and outperformed all other approaches to BSDS (Fig. 2a; Deng et al., 2018; Xie & Tu, 2017; Hallman & Fowlkes, 2015; Kokkinos, 2015; Wang et al., 2019; Liu et al., 2019).
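For context on the scores above, the following is a minimal sketch of how an ODS (Optimal Dataset Scale) F1 is typically computed for contour detection: a single confidence threshold is chosen that maximizes F1 over the whole test set. The function name and count arrays are illustrative, not from the paper.

```python
import numpy as np

def f1_ods(tp, fp, fn):
    """ODS F1: pick the one threshold that maximizes F1 over the dataset.

    tp, fp, fn: arrays of shape (num_thresholds,) holding true-positive,
    false-positive, and false-negative edge-pixel counts summed over all
    test images at each candidate threshold.
    """
    precision = tp / np.maximum(tp + fp, 1e-8)
    recall = tp / np.maximum(tp + fn, 1e-8)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-8)
    best = int(np.argmax(f1))
    return f1[best], best  # best F1 and the threshold index that achieves it
```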
Researcher Affiliation | Collaboration | Drew Linsley, Junkyung Kim, Alekh Ashok & Thomas Serre, Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, USA. {drew_linsley,alekh_ashok,thomas_serre}@brown.edu; junkyung@google.com
Pseudocode | Yes | See Appendix A for an algorithmic description of the γ-Net.
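The fGRU equations themselves are given in Appendix A of the paper. As rough orientation only, the sketch below shows a generic convolutional gated recurrent update unrolled for a fixed horizon, which is the overall pattern the γ-Net follows; the layer names are illustrative, LayerNormalization stands in for the paper's instance normalization, and the real fGRU uses separate suppression and facilitation stages rather than a single gate.

```python
import tensorflow as tf

class ConvGRUCell(tf.keras.layers.Layer):
    """Simplified convolutional gated recurrent cell (illustrative).

    `channels` must match the input's channel count so the hidden state
    can be initialized as zeros_like(input).
    """

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.gate_conv = tf.keras.layers.Conv2D(channels, kernel_size, padding="same")
        self.cand_conv = tf.keras.layers.Conv2D(channels, kernel_size, padding="same")
        # Stand-in for the instance normalization used in the paper.
        self.norm = tf.keras.layers.LayerNormalization()

    def call(self, x, h):
        z = tf.sigmoid(self.gate_conv(tf.concat([x, h], axis=-1)))  # update gate
        h_tilde = tf.tanh(self.norm(self.cand_conv(tf.concat([x, z * h], axis=-1))))
        return (1.0 - z) * h + z * h_tilde  # gated blend of old and new state

def run_recurrence(cell, x, timesteps=8):
    """Unroll the cell for a fixed horizon; the paper uses 8 timesteps."""
    h = tf.zeros_like(x)
    for _ in range(timesteps):
        h = cell(x, h)
    return h
```

For example, `run_recurrence(ConvGRUCell(32), tf.random.normal([1, 64, 64, 32]))` returns a refined feature map of the same shape after 8 recurrent steps.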
Open Source Code | No | The paper does not state that the source code for the described methodology is publicly available or provide a link to a repository.
Open Datasets | Yes | We evaluated γ-Net performance on two contour detection tasks: object contour detection in natural images (BSDS500 dataset; Arbeláez et al., 2011) and cell membrane detection in serial electron microscopy (SEM) images of mouse cortex (Kasthuri et al., 2015) and mouse retina (Ding et al., 2016). SNEMI3D images and annotations are publicly available (Kasthuri et al., 2015), whereas the Ding dataset is a volume from Ding et al. (2016) that we annotated.
Dataset Splits | Yes | The dataset contains object-contour annotations for 500 natural images, which are split into train (200), validation (100), and test (200) sets.
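As a sanity check when reproducing, these split sizes can be verified directly against the standard BSDS500 directory layout; the path below is illustrative and may differ in your download.

```python
from pathlib import Path

# Illustrative path: BSDS500 ships its images in train/val/test folders.
root = Path("BSR/BSDS500/data/images")
splits = {name: sorted((root / name).glob("*.jpg"))
          for name in ("train", "val", "test")}

assert len(splits["train"]) == 200
assert len(splits["val"]) == 100
assert len(splits["test"]) == 200
```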
Hardware Specification | Yes | The γ-Nets were trained with TensorFlow and NVIDIA Titan RTX GPUs using single-image batches and the Adam optimizer (Kingma & Ba, 2014; dataset-specific learning rates are detailed below). When training on an NVIDIA GeForce RTX, this γ-Net takes 1.8 seconds per image, whereas the BDCN takes 0.1 seconds per image.
Software Dependencies | No | The paper states 'The γ-Nets were trained with Tensorflow', but it does not specify the version number for TensorFlow or any other ancillary software libraries used, which is required for reproducibility.
Experiment Setup | Yes | All γ-Nets use 8 timesteps of recurrence and instance normalization (normalization controls vanishing gradients in RNN training; Ulyanov et al., 2016; Cooijmans et al., 2017; see Appendix A for details). The γ-Nets were trained with TensorFlow and NVIDIA Titan RTX GPUs using single-image batches and the Adam optimizer (Kingma & Ba, 2014; dataset-specific learning rates are detailed below). Models were trained with early stopping, which terminated training if the validation loss did not drop for 50 straight epochs. The γ-Net was trained with learning rates of 3e-4 on its randomly initialized fGRU weights and 1e-5 on its VGG-initialized weights.
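A minimal sketch of the two-learning-rate training step quoted above, assuming weights can be grouped by an "fgru"/"vgg" name prefix; the prefixes, function names, and loss are illustrative, since the paper does not describe its implementation at this level.

```python
import tensorflow as tf

# Two Adam optimizers implement the two learning rates reported in the paper.
fgru_opt = tf.keras.optimizers.Adam(learning_rate=3e-4)  # randomly initialized fGRU weights
vgg_opt = tf.keras.optimizers.Adam(learning_rate=1e-5)   # VGG-initialized weights

def train_step(model, image, label, loss_fn):
    with tf.GradientTape() as tape:
        pred = model(image[None], training=True)  # single-image batch, as in the paper
        loss = loss_fn(label, pred)
    # Hypothetical grouping: split trainable variables by name prefix.
    fgru_vars = [v for v in model.trainable_variables if "fgru" in v.name]
    vgg_vars = [v for v in model.trainable_variables if "vgg" in v.name]
    fgru_grads, vgg_grads = tape.gradient(loss, [fgru_vars, vgg_vars])
    fgru_opt.apply_gradients(zip(fgru_grads, fgru_vars))
    vgg_opt.apply_gradients(zip(vgg_grads, vgg_vars))
    return loss
```

The 50-epoch early-stopping rule can then wrap this step, e.g. via `tf.keras.callbacks.EarlyStopping(patience=50, monitor="val_loss")` when training through `model.fit`.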