Parallel Multi-Dimensional LSTM, With Application to Fast Biomedical Volumetric Image Segmentation
Authors: Marijn F. Stollenga, Wonmin Byeon, Marcus Liwicki, Jürgen Schmidhuber
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach on two 3D biomedical image segmentation datasets: electron microscopy (EM) and MR Brain images. |
| Researcher Affiliation | Collaboration | 1 Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (The Swiss AI Lab IDSIA), Switzerland; 2 Scuola universitaria professionale della Svizzera italiana (SUPSI), Switzerland; 3 Università della Svizzera italiana (USI), Switzerland; 4 University of Kaiserslautern, Germany; 5 German Research Center for Artificial Intelligence (DFKI), Germany |
| Pseudocode | No | No explicit pseudocode or algorithm block was found. |
| Open Source Code | No | The paper does not provide an explicit statement or link to its open-source code. |
| Open Datasets | Yes | The EM dataset [12] is provided by the ISBI 2012 workshop on Segmentation of Neuronal Structures in EM Stacks [15]. The MR Brain images are provided by the ISBI 2015 workshop on Neonatal and Adult MR Brain Image Segmentation (ISBI NEATBrain S15) [13]. Citation [15] also includes a URL: http://tinyurl.com/d2fgh7g. |
| Dataset Splits | No | For EM: One stack is used for training, the other for testing. For MR Brain: The dataset is divided into a training set with five volumes and a test set with fifteen volumes. No explicit mention of a validation split. |
| Hardware Specification | Yes | All experiments are performed on a desktop computer with an NVIDIA GTX TITAN X 12GB GPU. |
| Software Dependencies | No | The paper mentions using NVIDIA's cuDNN library but does not specify its version number or any other software dependencies with versions. |
| Experiment Setup | Yes | We apply RMS-prop [17] with momentum. We use a decaying learning rate: λ_lr = 10^(−6) + 10^(−2) · (1/2)^(epoch/100), which starts at λ_lr ≈ 10^(−2) and halves every 100 epochs asymptotically towards λ_lr = 10^(−6). Other hyper-parameters used are ϵ = 10^(−5), ρ_MSE = 0.9, and ρ_M = 0.9. Our networks contain three PyraMiD-LSTM layers. The first PyraMiD-LSTM layer has 16 hidden units followed by a fully-connected layer with 25 hidden units. ... The convolutional filter size for all PyraMiD-LSTM layers is set to 7 × 7. 50% dropout on fully-connected layers and/or 20% on the input layer. |
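The learning-rate schedule quoted above can be sketched as a small function. This is a reconstruction, not code from the paper: the closed form `1e-6 + 1e-2 * (1/2)**(epoch / 100)` is an assumption chosen to match the stated behavior (starts near 10^(−2), halves every 100 epochs, asymptotes to 10^(−6)).

```python
def lambda_lr(epoch: float) -> float:
    """Decaying learning rate, reconstructed from the paper's description.

    Starts near 1e-2, halves every 100 epochs, and asymptotically
    approaches a floor of 1e-6. The exact closed form is an assumption.
    """
    return 1e-6 + 1e-2 * 0.5 ** (epoch / 100)


# At epoch 0 the rate is ~1e-2; after 100 epochs the decaying part has halved.
print(lambda_lr(0))    # ~0.010001
print(lambda_lr(100))  # ~0.005001
```

Each successive 100-epoch interval halves the distance to the 10^(−6) floor, matching the paper's "halves every 100 epochs" wording.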