Automatic Neuron Detection in Calcium Imaging Data Using Convolutional Networks

Authors: Noah Apthorpe, Alexander Riordan, Robert Aguilar, Jan Homann, Yi Gu, David Tank, H. Sebastian Seung

NeurIPS 2016

Reproducibility Variable Result LLM Response
Research Type: Experimental. "Here we apply a supervised learning approach to this problem and show that convolutional networks can achieve near-human accuracy and superhuman speed. Accuracy is superior to the popular PCA/ICA method based on precision and recall relative to ground truth annotation by a human expert."
Researcher Affiliation: Academia. "¹Computer Science Department, ²Princeton Neuroscience Institute, Princeton University. {apthorpe, ariordan, dwtank, sseung}@princeton.edu"
Pseudocode: No. The paper describes the convolutional network architecture and processing steps verbally and with diagrams (Figure 2) but does not include any structured pseudocode or algorithm blocks.
Open Source Code: Yes. "A ready-to-use pipeline, including pre- and postprocessing, ConvNet training, and precision-recall scoring, will be publicly available for community use (https://github.com/NoahApthorpe/ConvnetCellDetection)."
Open Datasets: No. "Two-photon calcium imaging data were gathered from both the primary visual cortex (V1) and medial entorhinal cortex (MEC) from awake-behaving mice (Supplementary Methods). ... Human experts then annotated ROIs using the ImageJ Cell Magic Wand Tool [17]." The paper describes the collection and annotation of its own dataset but does not provide a direct link, DOI, or formal citation for public access to the raw or processed data.
Dataset Splits: Yes. "We divided the V1 series into 60% training, 20% validation, and 20% test sets and the MEC series into 50% training, 20% validation, and 30% test sets."
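The quoted split percentages can be illustrated with a short sketch. This is a hypothetical reconstruction, not the authors' code: `split_series` and the sequential (unshuffled) chunking are assumptions for illustration, as are the frame counts.

```python
def split_series(frames, fractions):
    """Split an ordered image series into contiguous chunks by fraction.

    All but the last fraction are rounded to whole frames; any rounding
    remainder goes into the final (test) chunk.
    """
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    splits, start = [], 0
    for frac in fractions[:-1]:
        end = start + int(round(frac * len(frames)))
        splits.append(frames[start:end])
        start = end
    splits.append(frames[start:])  # remainder -> last split
    return splits

# V1 series: 60% train / 20% validation / 20% test
v1_train, v1_val, v1_test = split_series(list(range(1000)), [0.6, 0.2, 0.2])

# MEC series: 50% train / 20% validation / 30% test
mec_train, mec_val, mec_test = split_series(list(range(1000)), [0.5, 0.2, 0.3])
```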
Hardware Specification: Yes. "The network was trained for 16800 stochastic gradient descent (SGD) updates for the V1 dataset, which took approximately 1.2 seconds/update (~5.5 hrs) on an Amazon EC2 c4.8xlarge instance (Supplementary Figure 1). The 2D network was trained for 14000 SGD updates for the V1 dataset, which took approximately 0.9 seconds/update (~3.75 hrs) on an Amazon EC2 c4.8xlarge instance (Supplementary Figure 1)."
Software Dependencies: No. "We used ZNN, an open-source sliding window ConvNet package with multi-core CPU parallelism and FFT-based convolution [18]." While ZNN is named as a software package, no version number is provided for it or any other software component, which is necessary for reproducibility.
Experiment Setup: Yes. "The (2+1)D network was trained with softmax loss and output patches of size 120 × 120. The learning rate parameter was annealed by hand from 0.01 to 0.002, and the momentum parameter was annealed by hand from 0.9 to 0.5. The network was trained for 16800 stochastic gradient descent (SGD) updates for the V1 dataset..."
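The training setup quoted above can be sketched as momentum SGD with hand-annealed hyperparameters. The paper states only the start and end values (learning rate 0.01 to 0.002, momentum 0.9 to 0.5) over 16800 updates; the single switch point at 75% of training below is an assumed schedule for illustration, not the authors' actual anneal.

```python
def schedule(update, total=16800):
    """Assumed step schedule: anneal both hyperparameters late in training.

    Returns (learning_rate, momentum) for a given SGD update index.
    The 0.75 * total switch point is a guess; the paper only gives the
    start/end values, annealed by hand.
    """
    if update < int(0.75 * total):
        return 0.01, 0.9
    return 0.002, 0.5

def sgd_step(w, grad, velocity, lr, momentum):
    """One momentum-SGD update on a scalar parameter."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Example: early vs. late training use different hyperparameters.
lr, mom = schedule(0)        # (0.01, 0.9) at the start
w, v = sgd_step(1.0, 0.5, 0.0, lr, mom)
lr, mom = schedule(16000)    # (0.002, 0.5) near the end
```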