Appearance Prompt Vision Transformer for Connectome Reconstruction
Authors: Rui Sun, Naisong Luo, Yuwen Pan, Huayu Mai, Tianzhu Zhang, Zhiwei Xiong, Feng Wu
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on multiple challenging benchmarks demonstrate that our APViT achieves consistent improvements with huge flexibility under the same post-processing strategy. (Section 4, Experiments) |
| Researcher Affiliation | Academia | ¹University of Science and Technology of China; ²Institute of Artificial Intelligence, Hefei Comprehensive National Science Center; ³Deep Space Exploration Lab |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described. |
| Open Datasets | Yes | Two commonly used neuron datasets, named CREMI [Funke et al., ] and AC3/AC4 [Arganda-Carreras et al., 2015], are used for the evaluation of our method. |
| Dataset Splits | Yes | Following the SNEMI3D challenge, we use the top 80 slices of AC4 as the training set and the rest of AC4 as the validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. It only states training parameters like batch size and optimizer. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | In our APViT, the number of layers is {1, 2, 4, 2}. The volume size of the input is anisotropic (18, 160, 160), and the patch size is (1, 2, 2) at each stage. During training, our model is trained with a batch size of 2, using the Adam optimizer with an initial learning rate of 0.0001 for 200,000 iterations. We constrain the output at different resolutions for each stage against the GT as an auxiliary loss, where we set λdiv = 0.1. |
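The quoted experiment setup can be collected into a single configuration sketch. Below is a minimal Python rendering of those hyperparameters; the key names and the helper function are illustrative assumptions, not from the paper — only the values come from the quoted setup.

```python
# Hedged sketch of the APViT training configuration as reported in the paper.
# Key names are illustrative; only the values are taken from the quoted setup.
APVIT_CONFIG = {
    "layers_per_stage": [1, 2, 4, 2],   # transformer layers at each of the 4 stages
    "input_volume": (18, 160, 160),     # anisotropic input size (depth, height, width)
    "patch_size": (1, 2, 2),            # patch size used at each stage
    "batch_size": 2,
    "optimizer": "Adam",
    "initial_lr": 1e-4,
    "iterations": 200_000,
    "lambda_div": 0.1,                  # weight of the auxiliary (per-stage) loss
}

def total_samples_seen(cfg):
    """Total training samples processed (with repetition) over the full run."""
    return cfg["batch_size"] * cfg["iterations"]
```

Under this reading of the setup, a full run processes `batch_size × iterations = 400,000` training samples (with repetition), which a reimplementation can use as a sanity check on its training loop length.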