Learned versus Hand-Designed Feature Representations for 3d Agglomeration

Authors: John A. Bogovic; Gary B. Huang; Viren Jain

ICLR 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate a large set of hand-designed 3d feature descriptors alongside features learned from the raw data using both end-to-end and unsupervised learning techniques, in the context of agglomeration of 3d neuron fragments. By combining unsupervised learning techniques with a novel dynamic pooling scheme, we show how pure learning-based methods are for the first time competitive with hand-designed 3d shape descriptors. We investigate data augmentation strategies for dramatically increasing the size of the training set, and show how combining both learned and hand-designed features leads to the highest accuracy.
Researcher Affiliation | Academia | John A. Bogovic, Gary B. Huang & Viren Jain; Janelia Farm Research Campus, Howard Hughes Medical Institute, 19700 Helix Drive, Ashburn, VA, USA; {bogovicj, huangg, jainv}@janelia.hhmi.org
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | No | The paper describes using custom data: 'Tissue from a drosophila melanogaster brain was imaged using focused ion-beam scanning electron microscopy (FIB-SEM [21]) at a resolution of 8 × 8 × 8 nm. ... We supplied the DAWMR network with 120 megavoxels of hand-segmented image data for training'. No concrete access information for this dataset is provided.
Dataset Splits | No | The paper describes training and test sets: 'One of the two volumes was randomly chosen to be the training set (14,522 edges: 7968 positive and 6584 negative), and the other volume serves as a test set (14,829 edges: 8342 positive and 6487 negative)'. A separate validation set is not explicitly described with specific details.
Hardware Specification | No | The paper mentions the hardware used for image acquisition ('focused ion-beam scanning electron microscopy (FIB-SEM)'), but it does not specify any computational hardware (e.g., GPUs, CPUs, servers) used for running the experiments.
Software Dependencies | No | The paper mentions software components like the 'dropout multilayer perceptron (MLP) [16]', 'decision-stump boosting classifier [13]', and 'DAWMR network [17]', citing the relevant papers. However, it does not provide specific version numbers for these software dependencies (e.g., Python 3.x, TensorFlow 2.x, PyTorch 1.x).
Experiment Setup | Yes | As our classifier, we use a drop-out multilayer perceptron (200 hidden units, 500,000 weight updates, rectified linear hidden units) [16], but also present results using a decision-stump boosting classifier [13].
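A minimal NumPy sketch of a classifier in the spirit of the quoted setup: a single-hidden-layer perceptron with 200 rectified-linear hidden units, inverted dropout, and SGD weight updates on a binary (merge / don't-merge) cross-entropy loss. The dropout rate, learning rate, initialization, and synthetic data below are illustrative assumptions, not values from the paper.

```python
import numpy as np

class DropoutMLP:
    """Single-hidden-layer perceptron: 200 ReLU units with dropout (a sketch)."""

    def __init__(self, n_in, n_hidden=200, p_drop=0.5, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))   # assumed init scale
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.p_drop, self.lr, self.rng = p_drop, lr, rng

    def forward(self, X, train=True):
        a = X @ self.W1 + self.b1
        relu = np.maximum(0.0, a)                          # rectified linear units
        # Inverted dropout: zero units with prob p_drop, rescale survivors,
        # so no rescaling is needed at test time.
        scale = ((self.rng.random(relu.shape) >= self.p_drop)
                 / (1.0 - self.p_drop)) if train else 1.0
        self._relu_mask, self._scale = (a > 0.0), scale
        self._h = relu * scale
        return 1.0 / (1.0 + np.exp(-(self._h @ self.W2 + self.b2)))

    def update(self, X, y):
        """One SGD weight update on a (features, labels) mini-batch."""
        p = self.forward(X, train=True)
        g = (p - y.reshape(-1, 1)) / len(X)                # dLoss/dlogit
        gh = (g @ self.W2.T) * self._scale * self._relu_mask
        self.W2 -= self.lr * (self._h.T @ g)
        self.b2 -= self.lr * g.sum(axis=0)
        self.W1 -= self.lr * (X.T @ gh)
        self.b1 -= self.lr * gh.sum(axis=0)

# Toy usage on synthetic "edge" features (the paper trains for 500,000
# weight updates; a few hundred suffice for this toy separable problem).
rng = np.random.default_rng(1)
X = rng.normal(size=(64, 10))
y = (X[:, 0] > 0).astype(float)                            # synthetic merge labels
clf = DropoutMLP(n_in=10)
for _ in range(300):
    clf.update(X, y)
preds = clf.forward(X, train=False).ravel()                # merge probabilities
```

Dropout at this small hidden-layer size (200 units) acts as the regularizer motivating the paper's choice of MLP; the boosting baseline mentioned in the same row is not sketched here.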