Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning
Authors: Yinda Chen, Wei Huang, Shenglong Zhou, Qi Chen, Zhiwei Xiong
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments conducted on representative EM datasets demonstrate that our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation. |
| Researcher Affiliation | Academia | Yinda Chen (1,2), Wei Huang (1), Shenglong Zhou (1), Qi Chen (1), Zhiwei Xiong (1,2); (1) University of Science and Technology of China; (2) Institute of Artificial Intelligence, Hefei Comprehensive National Science Center; {cyd0806, weih527, slzhou96, qic}@mail.ustc.edu.cn, zwxiong@ustc.edu.cn |
| Pseudocode | No | The paper describes the methods in prose and uses diagrams, but does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/ydchen0806/dbMiM. |
| Open Datasets | Yes | The Full Adult Fly Brain (FAFB) dataset [Zheng et al., 2018] is a highly valuable resource for neuroinformatics research, offering a comprehensive and detailed view of the neural architecture of the Drosophila melanogaster (fruit fly) brain. |
| Dataset Splits | Yes | Each sub-volume has 125 slices of 1250×1250 images, and we choose the first 60 slices for training, 15 slices for validation, and the remaining 50 slices for testing (see the slicing sketch after the table). |
| Hardware Specification | Yes | Pretraining is performed with batch size 16 on 8 RTX 3090 GPUs. |
| Software Dependencies | No | The paper mentions software components such as ViT, UNETR, and the Adam optimizer, but does not provide specific version numbers for any of these dependencies. |
| Experiment Setup | Yes | We use the Adam optimizer in both the pretraining and fine-tuning phases, with β1 = 0.9, β2 = 0.999. The only difference lies in the pretraining process, where we set the learning rate to 0.0001 and perform batch-size-16 pretraining on 8 RTX 3090s. In the fine-tuning phase, we adopt a Layerwise Learning Rate Decay (LLRD) training method, which adjusts the learning rate layer by layer during training. We set the learning rate of the last layer's parameters to 0.001 and the learning rate of the previous layer's parameters to 0.95 times the learning rate of the next layer's parameters (see the LLRD sketch after the table). |
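
For reference, the slice-wise split quoted in the Dataset Splits row can be written as a short Python sketch. The function name and the assumption that each sub-volume is stored as a (slices, height, width) NumPy array are ours for illustration, not code from the authors' repository.

```python
import numpy as np

def split_subvolume(subvolume: np.ndarray):
    """Split one 125 x 1250 x 1250 EM sub-volume along the slice axis
    into the train/val/test partitions quoted above
    (first 60 slices train, next 15 val, remaining 50 test).
    The (slices, height, width) layout is an assumption for illustration."""
    assert subvolume.shape[0] == 125, "expected 125 slices per sub-volume"
    train = subvolume[:60]    # slices 0-59
    val = subvolume[60:75]    # slices 60-74
    test = subvolume[75:]     # slices 75-124
    return train, val, test
```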
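
Similarly, the fine-tuning recipe quoted in the Experiment Setup row (Adam with β1 = 0.9, β2 = 0.999, a last-layer learning rate of 0.001, and a 0.95 per-layer decay) can be approximated with PyTorch parameter groups. This is a minimal sketch assuming the backbone is exposed as an ordered list of layers; the helper name and the toy layer stack are placeholders, not the authors' ViT/UNETR implementation.

```python
import torch

def build_llrd_param_groups(layers, last_layer_lr=1e-3, decay=0.95):
    """Layer-wise Learning Rate Decay: the last layer receives
    last_layer_lr, and every preceding layer receives `decay` times
    the learning rate of the layer that follows it."""
    num_layers = len(layers)
    groups = []
    for idx, layer in enumerate(layers):
        lr = last_layer_lr * (decay ** (num_layers - 1 - idx))
        groups.append({"params": layer.parameters(), "lr": lr})
    return groups

# Toy example: a 4-layer stack standing in for the encoder/decoder layers.
model = torch.nn.Sequential(*[torch.nn.Linear(64, 64) for _ in range(4)])
optimizer = torch.optim.Adam(build_llrd_param_groups(list(model)),
                             betas=(0.9, 0.999))
```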