Combinatorial Energy Learning for Image Segmentation

Authors: Jeremy B. Maitin-Shepard, Viren Jain, Michal Januszewski, Peter Li, Pieter Abbeel

NeurIPS 2016

Reproducibility variables, with the assessed result and the supporting LLM response for each:

Research Type: Experimental
LLM Response: On an 11 billion voxel test set, we find that our method improves volumetric reconstruction accuracy by more than 20% as compared to two state-of-the-art baseline methods: graph-based segmentation of the output of a 3-D convolutional neural network trained to predict boundaries, as well as a random forest classifier trained to agglomerate supervoxels that were generated by a 3-D convolutional neural network.

Researcher Affiliation: Collaboration
LLM Response: Jeremy Maitin-Shepard (UC Berkeley and Google, jbms@google.com); Viren Jain (Google, viren@google.com); Michal Januszewski (Google, mjanusz@google.com); Peter Li (Google, phli@google.com); Pieter Abbeel (UC Berkeley, pabbeel@cs.berkeley.edu)

Pseudocode: No
LLM Response: The paper describes algorithms and procedures in prose (e.g., 'Using a simple greedy policy, at each step we consider all possible agglomeration actions...'), but it does not include formal pseudocode blocks or sections explicitly labeled 'Algorithm'.

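The greedy policy quoted above is straightforward to sketch. The following is a minimal illustration, not the paper's implementation: score_fn, the data structures, and the stopping threshold are all hypothetical stand-ins for CELIS's learned combinatorial energy and its distributed agglomeration machinery.

```python
def greedy_agglomerate(segments, adjacency, score_fn, threshold=0.5):
    """At each step, score all possible agglomeration actions and apply
    the best-scoring merge, stopping once no action clears the threshold."""
    while True:
        # Consider every remaining adjacent pair (the 'all possible
        # agglomeration actions' of the quoted passage).
        candidates = [(score_fn(segments[a], segments[b]), a, b)
                      for a, b in map(tuple, adjacency)]
        if not candidates:
            break
        best_score, a, b = max(candidates)
        if best_score < threshold:
            break
        # Merge segment b into segment a and rewire the adjacency edges.
        segments[a] |= segments.pop(b)
        adjacency = {frozenset(a if v == b else v for v in e) for e in adjacency}
        adjacency = {e for e in adjacency if len(e) == 2}  # drop the collapsed self-edge
    return segments


# Toy usage with a hypothetical score that favors small merged segments:
segs = {1: {(0, 0, 0)}, 2: {(0, 0, 1)}, 3: {(0, 0, 2), (0, 0, 3)}}
adj = {frozenset({1, 2}), frozenset({2, 3})}
print(greedy_agglomerate(segs, adj, lambda s, t: 1.0 / len(s | t)))
```
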
Open Source Code: No
LLM Response: The paper does not contain an explicit statement that the source code for the described methodology is publicly available, nor does it provide a direct link to a code repository.

Open Datasets: Yes
LLM Response: We tested our approach on a large, publicly available electron microscopy dataset, called Janelia FIB25, of a portion of the Drosophila melanogaster optic lobe. ... [13] Janelia Fly EM. https://www.janelia.org/project-team/flyem/data-and-software-release. Accessed: 2016-05-19.

Dataset Splits: Yes
LLM Response: For our experiments, we split the dataset into separate training and testing portions along the z axis: the training portion comprises z-sections 2005-5005, and the testing portion comprises z-sections 5005-8000 (about 11 billion voxels).

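For illustration, a minimal NumPy sketch of such a z-axis split follows. The volume shape and dtype are placeholder assumptions (the real FIB25 test portion alone is about 11 billion voxels), and NumPy's half-open slices match the shared 5005 boundary between the two portions.

```python
import numpy as np

# Placeholder volume indexed (z, y, x); the shape is an assumption chosen
# so the snippet runs, not the FIB25 dataset's true extent.
volume = np.zeros((8000, 64, 64), dtype=np.uint8)

train = volume[2005:5005]  # training portion: z-sections 2005-5005
test = volume[5005:8000]   # testing portion: z-sections 5005-8000
```
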
Hardware Specification: No
LLM Response: The paper mentions running CELIS 'on volumes approaching 1 teravoxel in a matter of hours, albeit using many thousands of CPU cores,' and refers to a 'distributed architecture' (Ref [9]), but it does not specify any exact CPU models, GPU models, or detailed hardware configurations used for the experiments.

Software Dependencies: No
LLM Response: The paper mentions various software components and concepts like 'stochastic gradient descent', 'random forest classifier', 'convolutional neural network', and 'watershed algorithm', but it does not provide specific version numbers for any libraries, frameworks, or solvers used (e.g., PyTorch 1.x, TensorFlow 2.x, scikit-learn 0.x).

Experiment Setup: Yes
LLM Response: We optimized the parameters of the network using stochastic gradient descent with log loss. We trained several different networks, varying as hyperparameters the amount of dilation of boundaries in the training data (in order to increase extracellular space) from 0 to 8 voxels and whether components smaller than 10000 voxels were excluded. ... We used parameters T_l = 0.95, T_h = 0.95, T_e = 0.5, and T_s = 1000 voxels. ... We used two 2048-dimensional fully-connected rectified linear hidden layers, followed by a logistic output unit, and applied dropout (with p = 0.5) after the last hidden layer.

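The quoted scoring network is small enough to write out. Below is a sketch in PyTorch, used here only as a convenient stand-in (the paper predates it and names no framework): two 2048-unit fully-connected rectified-linear layers, dropout with p = 0.5 after the last hidden layer, a logistic output unit, and SGD with log loss. The input feature dimensionality and learning rate are assumptions not given in the quote.

```python
import torch
import torch.nn as nn

IN_DIM = 512  # hypothetical feature dimensionality; not stated in the quote

model = nn.Sequential(
    nn.Linear(IN_DIM, 2048),
    nn.ReLU(),
    nn.Linear(2048, 2048),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # applied after the last hidden layer, per the paper
    nn.Linear(2048, 1),
    nn.Sigmoid(),        # logistic output unit
)

# Stochastic gradient descent with log (binary cross-entropy) loss,
# as stated in the setup; the learning rate is an assumption.
loss_fn = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```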