DeepBranchTracer: A Generally-Applicable Approach to Curvilinear Structure Reconstruction Using Multi-Feature Learning
Authors: Chao Liu, Ting Zhao, Nenggan Zheng
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We extensively evaluated our model on both 2D and 3D datasets, demonstrating its superior performance over existing segmentation and reconstruction methods in terms of accuracy and continuity. |
| Researcher Affiliation | Academia | Chao Liu1,2, Ting Zhao3, Nenggan Zheng1,2,4,5 1Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, Zhejiang, China 2College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China 3Independent Researcher 4State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, Zhejiang, China 5CCAI by MOE and Zhejiang Provincial Government (ZJU), Hangzhou, Zhejiang, China supermeliu@zju.edu.cn, tingzhao@gmail.com, zng@cs.zju.edu.cn |
| Pseudocode | Yes | The detail of the tracing strategy is laid out in Algorithm 1. |
| Open Source Code | Yes | The code of this work is available at https://github.com/CSDLLab/DeepBranchTracer. |
| Open Datasets | Yes | We conducted experiments on five popular datasets containing 2D and 3D images: the Massachusetts Roads Dataset (Mnih 2013), the DRIVE (Staal et al. 2004) and CHASE DB1 (Carballal et al. 2018) retina datasets, and the 3D DIADEM-CCF (Brown et al. 2011) and fMOST-VTA (Li et al. 2010; Gong et al. 2013) neuron datasets. |
| Dataset Splits | No | For the fMOST-VTA dataset, the paper states: 'Among them, 20 were used for training and 5 for testing.' It does not explicitly specify a validation split or details for other datasets. |
| Hardware Specification | Yes | The hardware configuration has 64 GB of memory, an Intel Xeon E5-2680 v3 CPU, and four NVIDIA GeForce RTX 2080 Ti GPUs. |
| Software Dependencies | Yes | The network architecture is implemented in PyTorch 1.7.0 and MindSpore 1.7.0 (Huawei Technologies Co. 2022). |
| Experiment Setup | Yes | During the training process, all the 2D images are split into small patches of size 64×64 with a batch size of 64, and all the 3D images are split into small patches of size 16×64×64 with a batch size of 16. The hyper-parameters in Eq. (11) and Eq. (14) are set to λd = 1, λr = 100, λc = 1, and λb = 1. The weights of the binary cross-entropy loss functions in Eq. (12) and Eq. (13) are set to wc = wb = 0.9. The thresholds in the MFT strategy are set to Tc = 0.5 and Tb = 0.5. |
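To make the reported hyper-parameters concrete, the loss weighting described in the experiment setup can be sketched as below. This is a minimal illustration, not the authors' code: the function names, the pure-Python weighted BCE, and the names of the four loss terms (direction, radius, centerline, branch) are assumptions inferred from the quoted λd, λr, λc, λb and wc = wb = 0.9 values; the paper's actual Eq. (11)–(14) definitions live in its GitHub repository.

```python
import math

def weighted_bce(preds, targets, w_pos=0.9):
    """Weighted binary cross-entropy in the style of Eq. (12)/(13):
    foreground (target = 1) terms are weighted by w_pos = 0.9,
    background terms by (1 - w_pos)."""
    eps = 1e-7  # clamp to avoid log(0)
    total = 0.0
    for p, t in zip(preds, targets):
        p = min(max(p, eps), 1.0 - eps)
        total += -(w_pos * t * math.log(p)
                   + (1.0 - w_pos) * (1.0 - t) * math.log(1.0 - p))
    return total / len(preds)

def total_loss(l_d, l_r, l_c, l_b,
               lam_d=1.0, lam_r=100.0, lam_c=1.0, lam_b=1.0):
    """Weighted sum of the four loss terms with the reported
    hyper-parameters: λd = 1, λr = 100, λc = 1, λb = 1."""
    return lam_d * l_d + lam_r * l_r + lam_c * l_c + lam_b * l_b
```

For example, `total_loss(0.2, 0.01, 0.3, 0.1)` evaluates to 1.6, showing how the λr = 100 weight amplifies even a small radius-regression error relative to the other terms.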