Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing

Authors: Qihua Chen, Xuejin Chen, Chenxuan Wang, Yixiong Liu, Zhiwei Xiong, Feng Wu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive comparisons of different combination schemes of image and morphological representation in identifying split errors across the whole fly brain demonstrate the superiority of the proposed approach, especially for the locations that contain severe imaging artifacts, such as section missing and misalignment.
Researcher Affiliation | Academia | Qihua Chen, Xuejin Chen*, Chenxuan Wang, Yixiong Liu, Zhiwei Xiong, Feng Wu. National Engineering Laboratory for Brain-inspired Intelligence Technology and Application, University of Science and Technology of China, Hefei 230027, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230088, China.
Pseudocode | No | The paper describes its methods and includes mathematical formulas but does not provide a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | The dataset and code are available at https://github.com/Levishery/Flywire-Neuron-Tracing.
Open Datasets | Yes | The dataset and code are available at https://github.com/Levishery/Flywire-Neuron-Tracing. The source EM images for FlyTracing are from a complete adult Drosophila brain, imaged at 4 × 4 nm resolution and sectioned at a thickness of 40 nm, known as the full adult fly brain (FAFB) dataset (Zheng et al. 2018).
Dataset Splits | Yes | 1,000 blocks are randomly selected as the training and validation set for the image embedding network and the pairwise connectivity prediction models.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU types) used to run the experiments.
Software Dependencies | No | The paper mentions using "PointNet++ (Qi et al. 2017)" and states that "Our EmbedNet follows the architecture of residual symmetric U-Net (Lee et al. 2017)", but it does not provide version numbers for these or any other software dependencies.
Experiment Setup | Yes | We set the input volume size to 129 × 129 × 17 under a voxel resolution of 16 × 16 × 40 nm³, and the embedding dimension k = 16. We train the embedding network with the AdamW optimizer for 500k iterations with a batch size of 8, and apply data augmentation including random rotation, rescaling, flipping, and grayscale intensity augmentation. The initial learning rate is set to 0.002 with warmup and a step-decay scheduler. The number of negative sample pairs is n = 20.
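
To make the Experiment Setup row concrete, below is a minimal sketch of the reported training configuration, assuming PyTorch. The model stand-in, the warmup length, and the step-decay milestones are assumptions not stated in the paper; the authors' repository is the authoritative reference.

    import torch
    import torch.nn as nn

    # Hyperparameters quoted from the paper.
    INPUT_SIZE = (17, 129, 129)   # (z, y, x) voxels at 16 x 16 x 40 nm^3 resolution
    EMBED_DIM = 16                # embedding dimension k
    BATCH_SIZE = 8
    TOTAL_ITERS = 500_000
    BASE_LR = 2e-3
    NUM_NEG_PAIRS = 20            # negative sample pairs n

    # Stand-in for the authors' EmbedNet (a residual symmetric U-Net); a single
    # 3D convolution is used here only so the snippet runs end to end.
    model = nn.Conv3d(in_channels=1, out_channels=EMBED_DIM, kernel_size=3, padding=1)

    optimizer = torch.optim.AdamW(model.parameters(), lr=BASE_LR)

    # Warmup followed by step decay, as described; the warmup length and decay
    # schedule below are illustrative assumptions.
    warmup_iters = 5_000
    warmup = torch.optim.lr_scheduler.LinearLR(
        optimizer, start_factor=0.1, total_iters=warmup_iters)
    decay = torch.optim.lr_scheduler.StepLR(optimizer, step_size=150_000, gamma=0.5)
    scheduler = torch.optim.lr_scheduler.SequentialLR(
        optimizer, schedulers=[warmup, decay], milestones=[warmup_iters])

    # One illustrative optimization step on random data (placeholder loss, not
    # the paper's embedding objective).
    volume = torch.randn(BATCH_SIZE, 1, *INPUT_SIZE)
    embedding = model(volume)      # (B, EMBED_DIM, z, y, x)
    loss = embedding.pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()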
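
The Dataset Splits row states only that 1,000 blocks are randomly selected for training and validation. A minimal sketch of such a random block split follows; the block IDs, the random seed, and the 9:1 train/validation ratio are illustrative assumptions, not values reported in the paper.

    import random

    block_ids = list(range(1000))   # placeholder IDs for the 1,000 sampled blocks
    random.seed(0)                  # assumed seed, for repeatability of the split
    random.shuffle(block_ids)

    val_fraction = 0.1              # assumed ratio; the paper does not report it
    n_val = int(len(block_ids) * val_fraction)
    val_blocks = block_ids[:n_val]
    train_blocks = block_ids[n_val:]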