Recursive Training of 2D-3D Convolutional Networks for Neuronal Boundary Prediction

Authors: Kisuk Lee, Aleksandar Zlateski, Ashwin Vishwanathan, H. Sebastian Seung

NeurIPS 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Here we achieve a substantial gain in accuracy through three innovations. The pixel-wise classification error of each model on the test set was 10.63% (N4), 9.77% (VD2D), and 8.76% (VD2D3D).
Researcher Affiliation | Academia | Kisuk Lee, Aleksandar Zlateski (Massachusetts Institute of Technology, {kisuklee,zlateski}@mit.edu); Ashwin Vishwanathan, H. Sebastian Seung (Princeton University, {ashwinv,sseung}@princeton.edu)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | We trained our networks with ZNN (https://github.com/seung-lab/znn-release, [17]), which uses multicore CPU parallelism for speed.
Open Datasets | Yes | Therefore we make the annotated dataset publicly available (http://seunglab.org/data/).
Dataset Splits | No | In all experiments we used stack1 for testing, stack2 and stack3 for training, and stack4 as additional training data for recursive training. The paper does not mention a separate validation split (see the split sketch after this table).
Hardware Specification | No | The paper mentions 'multicore CPU parallelism' for ZNN, but does not specify exact CPU models, memory details, or any other specific hardware components used for running the experiments.
Software Dependencies | No | The paper mentions 'ZNN' and provides a GitHub link, but it does not specify version numbers for ZNN or any other software dependencies, which is required for reproducibility.
Experiment Setup | Yes | We always used a fixed learning rate of 0.01 with momentum of 0.9. When updating weights we divided the gradient by the total number of pixels in an output patch, similar to typical minibatch averaging. We first trained N4 with an output patch of size 200×200×1 for 90K gradient updates. Next, we trained VD2D with 150×150×1 output patches... After 60K updates, we evaluated the trained VD2D on the training stacks to obtain preliminary boundary maps, and started training VD2D3D with 100×100×1 output patches... We trained VD2D3D for 90K updates. (See the configuration sketch after this table.)
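
The stack-level split reported under Dataset Splits can be restated as a simple assignment. This is a minimal sketch; the dictionary layout and key names are our own illustration, and the paper describes no separate validation stack.

```python
# Stack-level split from the paper; the dictionary layout and key names
# are illustrative assumptions, not from the paper itself.
splits = {
    "test": ["stack1"],
    "train": ["stack2", "stack3"],
    # stack4 is used only as additional training data during the recursive
    # stage, in which VD2D3D also receives VD2D's preliminary boundary maps.
    "train_recursive_extra": ["stack4"],
}
```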
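The Experiment Setup row describes a three-stage training schedule. Training was done in ZNN, whose API we do not reproduce here; the sketch below merely restates the reported hyperparameters as a plain Python configuration, with the per-output-pixel gradient normalization written out as a helper. All identifiers are our own assumptions.

```python
import numpy as np

# Hyperparameters reported in the paper (actual training used ZNN).
LEARNING_RATE = 0.01  # fixed for all stages
MOMENTUM = 0.9

# Three-stage schedule: (model, output patch size (x, y, z), gradient updates).
# VD2D's entry counts the 60K updates after which it was evaluated on the
# training stacks to produce preliminary boundary maps for VD2D3D.
SCHEDULE = [
    ("N4",     (200, 200, 1), 90_000),
    ("VD2D",   (150, 150, 1), 60_000),
    ("VD2D3D", (100, 100, 1), 90_000),
]

def normalize_gradient(grad: np.ndarray, patch_size: tuple) -> np.ndarray:
    """Divide the gradient by the number of pixels in the output patch,
    analogous to typical minibatch averaging, as the paper describes."""
    num_pixels = patch_size[0] * patch_size[1] * patch_size[2]
    return grad / num_pixels
```

Dividing by the output-patch pixel count keeps the effective step size comparable across stages even though the patch shrinks from 200×200×1 to 100×100×1.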