Deep Contextual Networks for Neuronal Structure Segmentation

Authors: Hao Chen, Xiaojuan Qi, Jie-Zhi Cheng, Pheng-Ann Heng

AAAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on the benchmark dataset of 2012 ISBI segmentation challenge of neuronal structures suggest that the proposed method can outperform the state-of-the-art methods by a large margin with respect to different evaluation measurements.
Researcher Affiliation | Academia | Department of Computer Science and Engineering, The Chinese University of Hong Kong; School of Medicine, Shenzhen University, China; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
Pseudocode | No | The paper describes the method and architecture in natural language and via a diagram, but it does not include a structured pseudocode or algorithm block.
Open Source Code | No | The paper mentions the 'open-source framework of Caffe library' (a third-party tool) and provides a link to 'probability maps including training and testing data' (http://appsrv.cse.cuhk.edu.hk/~hchen/research/2012isbi_seg.html), but there is no explicit statement about releasing the source code for their own method.
Open Datasets | Yes | We evaluated our method on the public dataset of 2012 ISBI EM Segmentation Challenge (Ignacio et al. 2012), which is still open for submissions. The training dataset contains a stack of 30 slices from a ssTEM dataset of the Drosophila first instar larva ventral nerve cord (VNC), which measures approximately 2x2x1.5 microns with a resolution of 4x4x50 nm/voxel. The images were manually annotated in the pixel-level by a human neuroanatomist using the software tool TrakEM2 (Cardona et al. 2012). The ground truth masks of training data were provided while those of testing data with 30 slices were held out by the organizers for evaluation. We evaluated the performance of our method by submitting results to the online testing system. ... Please refer to the leader board for more details: http://brainiac2.mit.edu/isbi_challenge/leaders-board (a data-loading sketch appears after this table).
Dataset Splits | No | The paper states 'The training dataset contains a stack of 30 slices... The ground truth masks of training data were provided while those of testing data with 30 slices were held out by the organizers for evaluation.' It mentions using the training data for parameter tuning ('The parameter wf is determined by obtaining the optimal result of rand error on the training data in our experiments.'), but does not explicitly define a separate validation set split (see the weight-selection sketch after this table).
Hardware Specification | Yes | The training time on the augmentation dataset took about three hours using a standard PC with a 2.50 GHz Intel(R) Xeon(R) E5-1620 CPU and a NVIDIA GeForce GTX Titan X GPU.
Software Dependencies | No | The proposed method was implemented with the mixed programming technology of Matlab and C++ under the open-source framework of Caffe library (Jia et al. 2014). The paper mentions 'Matlab', 'C++', and the 'Caffe library' with a citation, but does not provide specific version numbers for these software components.
Experiment Setup | Yes | We randomly cropped a region (size 480x480) from the original image as the input into the network and trained it with standard back-propagation using stochastic gradient descent (momentum = 0.9, weight decay = 0.0005, the learning rate was set as 0.01 initially and decreased by a factor of 10 every two thousand iterations). The parameter of corresponding discount weight wc was set as 1 initially and decreased by a factor of 10 every ten thousand iterations till a negligible value 0.01. (A training-schedule sketch appears after this table.)
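
For orientation, here is a minimal sketch of loading the 30-slice ssTEM training stack described in the Open Datasets row. It assumes the challenge's multi-page TIFF distribution (file names train-volume.tif and train-labels.tif) and the tifffile library; the file names and the 512x512 slice size are assumptions, not quoted from the paper.

```python
# Minimal loading sketch; file names and slice size are assumptions.
import tifffile

volume = tifffile.imread("train-volume.tif")   # expected shape: (30, 512, 512), uint8 EM slices
labels = tifffile.imread("train-labels.tif")   # binary membrane masks, same shape as the volume

print(volume.shape, volume.dtype)
# At 4x4x50 nm/voxel, a 512x512x30 stack spans roughly 2 x 2 x 1.5 microns,
# matching the physical extent quoted in the paper.
```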
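The Dataset Splits row notes that the fusion weight wf was tuned by minimizing the Rand error on the training data rather than on a held-out validation set. The sketch below shows one way such a sweep could look; the linear blend of two probability maps (prob_a, prob_b), the 0.5 threshold, and the helper names are illustrative assumptions, not the authors' fusion scheme. It uses scikit-image's adapted Rand error, which is in the spirit of the challenge metric.

```python
# Hypothetical sweep of a fusion weight wf by adapted Rand error on training slices.
import numpy as np
from scipy.ndimage import label
from skimage.metrics import adapted_rand_error

def segment(prob_map, threshold=0.5):
    """Binarize a membrane probability map and label connected regions."""
    foreground = prob_map < threshold            # low membrane probability -> cell interior
    regions, _ = label(foreground)
    return regions

def pick_wf(prob_a, prob_b, gt_labels, candidates=np.linspace(0.0, 1.0, 11)):
    """Grid-search wf on the training slices; gt_labels are per-slice region labels."""
    best_wf, best_err = None, np.inf
    for wf in candidates:
        fused = wf * prob_a + (1.0 - wf) * prob_b    # assumed linear fusion rule
        err = np.mean([adapted_rand_error(gt, segment(p))[0]
                       for gt, p in zip(gt_labels, fused)])
        if err < best_err:
            best_wf, best_err = wf, err
    return best_wf, best_err
```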
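The Experiment Setup row fully specifies the optimization schedule, so it can be restated as code. Below is a minimal PyTorch sketch of that schedule (the original implementation used Caffe, not PyTorch); the model interface returning a main output plus auxiliary outputs, and a data loader yielding 480x480 random crops, are assumptions for illustration.

```python
# Sketch of the quoted schedule: SGD (momentum 0.9, weight decay 0.0005),
# lr 0.01 divided by 10 every 2,000 iterations, and an auxiliary-loss
# discount weight wc divided by 10 every 10,000 iterations down to 0.01.
import torch

def discount_weight(iteration, start=1.0, step=10_000, floor=0.01):
    """wc schedule: divide by 10 every `step` iterations, clipped at `floor`."""
    return max(start * (0.1 ** (iteration // step)), floor)

def train(model, data_loader, max_iters=20_000):
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                                momentum=0.9, weight_decay=0.0005)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2_000, gamma=0.1)
    criterion = torch.nn.CrossEntropyLoss()

    for it, (image, target) in enumerate(data_loader):    # assumed 480x480 random crops
        if it >= max_iters:
            break
        main_out, aux_outs = model(image)                 # assumed model interface
        wc = discount_weight(it)
        loss = criterion(main_out, target)
        loss = loss + wc * sum(criterion(a, target) for a in aux_outs)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()
```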