Diffusion Adversarial Representation Learning for Self-supervised Vessel Segmentation
Authors: Boah Kim, Yujin Oh, Jong Chul Ye
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on various datasets show that our method significantly outperforms existing unsupervised and self-supervised vessel segmentation methods. |
| Researcher Affiliation | Academia | Boah Kim, Yujin Oh, Jong Chul Ye; Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea; {boahkim,yujin.oh,jong.ye}@kaist.ac.kr |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Source code is available at https://github.com/bispl-kaist/DARL. |
| Open Datasets | Yes | We train our model with the publicly available unlabeled X-ray coronary angiography disease (XCAD) dataset obtained during stent placement surgery and generated synthetic fractal masks (Ma et al., 2021). ... Furthermore, we evaluate cross-organ generalization capability on retinal imaging datasets: DRIVE (Staal et al., 2004) and STARE (Hoover & Goldbaum, 2003). |
| Dataset Splits | Yes | Additional 126 angiography images, along with the ground-truth vessel masks annotated by experienced radiologists, are divided into validation and test sets by 10% and 90%, respectively. |
| Hardware Specification | Yes | Our model is optimized by using the Adam algorithm (Kingma & Ba, 2014) with a learning rate of 5 · 10⁻⁶ on a single GPU card of Nvidia Quadro RTX 6000. |
| Software Dependencies | No | All the implementations are done using the library of PyTorch (Paszke et al., 2019) in Python. The paper mentions PyTorch and Python but does not provide specific version numbers for these or other libraries. |
| Experiment Setup | Yes | To train the model, we set the number of time steps as T = 2000 with the linearly scheduled noise levels from 10⁻⁶ to 10⁻². Within this range, we sample the noisy angiograms by setting Ta to 200. Also, we set the hyperparameters of the loss function as α = 0.2 and β = 5. Our model is optimized by using the Adam algorithm (Kingma & Ba, 2014) with a learning rate of 5 · 10⁻⁶ on a single GPU card of Nvidia Quadro RTX 6000. We train the model for 150 epochs, and the model in the epoch with the best performance on the validation set is used for test data. |
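
For readers re-implementing the reported setup, the following is a minimal PyTorch sketch of the stated hyperparameters (T = 2000, linear noise schedule from 10⁻⁶ to 10⁻², Ta = 200, loss weights α = 0.2 and β = 5, Adam with learning rate 5 · 10⁻⁶, 150 epochs). Helper names such as `make_beta_schedule` and `sample_noisy_angiogram` are illustrative assumptions and are not taken from the released DARL repository.

```python
import torch

# Hyperparameters as reported in the paper's experiment setup.
T = 2000            # total number of diffusion time steps
T_a = 200           # noisy angiograms are sampled with t < T_a
alpha_weight = 0.2  # loss weight alpha (reported)
beta_weight = 5.0   # loss weight beta (reported)

def make_beta_schedule(t_max: int = T,
                       beta_start: float = 1e-6,
                       beta_end: float = 1e-2) -> torch.Tensor:
    """Linearly scheduled noise levels from 1e-6 to 1e-2, as reported."""
    return torch.linspace(beta_start, beta_end, t_max)

betas = make_beta_schedule()
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def sample_noisy_angiogram(x0: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Standard DDPM-style forward noising with t drawn uniformly from [0, T_a)."""
    t = torch.randint(0, T_a, (x0.shape[0],), device=x0.device)
    a_bar = alphas_cumprod.to(x0.device)[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * noise
    return x_t, t

# Optimizer settings as stated: Adam with learning rate 5e-6, 150 epochs.
# `model` is a placeholder module standing in for the DARL networks.
model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-6)
num_epochs = 150
```

This sketch only reflects the hyperparameters quoted above; the full training loop, adversarial losses, and network architectures are in the authors' repository linked in the Open Source Code row.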