Unsupervised Multi-Modal Medical Image Registration via Discriminator-Free Image-to-Image Translation
Authors: Zekang Chen, Jia Wei, Rui Li
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate four variants of our approach on the public Learn2Reg 2021 datasets [Hering et al., 2021]. The experimental results demonstrate that the proposed architecture achieves state-of-the-art performance. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, South China University of Technology, Guangzhou, China 2Golisano College of Computing and Information Sciences, Rochester Institute of Technology, Rochester, NY 14623, USA |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/heyblackC/DFMIR. |
| Open Datasets | Yes | We evaluated our proposed method on two public datasets. Both of them are obtained from MICCAI Learn2Reg 2021 challenge [Hering et al., 2021] |
| Dataset Splits | Yes | we randomly split the dataset into 10/2/4 pairs for train/validation/test, and central 90 slices containing organs in each scan are extracted for our experiments. ... we randomly divide the 30 pairs into 20/4/6 for train/validation/test, and on each volume, we extract middle 100 slices. |
| Hardware Specification | Yes | all the experiments were conducted on GeForce RTX 2080 Ti. |
| Software Dependencies | No | Our networks are implemented in PyTorch. No specific version number for PyTorch or other software is provided. |
| Experiment Setup | Yes | The translation network T is a Resnet-based architecture with 9 residual blocks. Our encoder is defined as the first half of the translation network, and five layers of features in the encoder are extracted. The registration network adopts a U-net based architecture with skip connections from contracting path to expanding path [Ronneberger et al., 2015]. For the initialization of networks, we use the Xavier initialization method. Our networks are implemented in PyTorch and all the experiments were conducted on GeForce RTX 2080 Ti. We use Adam optimizer to train our model for 300 epochs with parameters lr = 0.0002, β1 = 0.5 and β2 = 0.999. Linear learning rate decay is activated after 200 epochs. ... where we set λP = 0.25, λA = 1, λL = 0.25 and λG = 1 in our experiments. |
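The learning-rate schedule quoted above (constant lr = 0.0002 for 200 epochs, then linear decay over the remaining 100 epochs of a 300-epoch run) can be sketched as a small helper. This is a minimal illustration of the schedule only, not the authors' code; the function name and defaults are our own, and in PyTorch the same behavior would typically be wired up via `torch.optim.lr_scheduler.LambdaLR`.

```python
def learning_rate(epoch, base_lr=2e-4, total_epochs=300, decay_start=200):
    """Schedule described in the paper: hold base_lr for the first
    `decay_start` epochs, then decay linearly to zero by `total_epochs`."""
    if epoch < decay_start:
        return base_lr
    # Linear ramp from base_lr at epoch `decay_start` down to 0 at `total_epochs`.
    return base_lr * (1.0 - (epoch - decay_start) / (total_epochs - decay_start))

# Sample points across the 300-epoch run: start, end of the flat phase,
# midway through the decay, and the final epoch.
print(learning_rate(0), learning_rate(199), learning_rate(250), learning_rate(299))
```

At epoch 250 the rate has halved to 0.0001, and it reaches zero only at the (never-trained) epoch 300, matching "linear learning rate decay activated after 200 epochs."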