Bi-level Probabilistic Feature Learning for Deformable Image Registration

Authors: Risheng Liu, Zi Li, Yuxi Zhang, Xin Fan, Zhongxuan Luo

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments of image-to-atlas and image-to-image deformable registration on 3D brain MR datasets demonstrate that we achieve state-of-the-art performance in terms of accuracy, efficiency, and robustness."
Researcher Affiliation | Academia | Risheng Liu (1,2), Zi Li (1,2), Yuxi Zhang (1,2), Xin Fan (1,2), and Zhongxuan Luo (2,3,4); 1. International School of Information Science & Engineering, Dalian University of Technology; 2. Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province; 3. School of Software Technology, Dalian University of Technology; 4. Institute of Artificial Intelligence, Guilin University of Electronic Technology; {rsliu, xin.fan, zxluo}@dlut.edu.cn, alisonbrielee@gmail.com, yuxizhang@mail.dlut.edu.cn
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about, or link to, open-source code for the described methodology.
Open Datasets | Yes | "368 T1-weighted MR volumes from three publicly available datasets: ADNI [Mueller et al., 2005], ABIDE [Di Martino et al., 2014] and OASIS [Marcus et al., 2007] are selected and split into 281, 17, and 70 for training, validation, and testing, respectively."
Dataset Splits | Yes | Same evidence as above: the 368 volumes are split into 281 training, 17 validation, and 70 test volumes (a minimal split sketch is given after this table).
Hardware Specification | Yes | "We run Elastix, ANTs (SyN), and NiftyReg on a PC with i7-8700 (@3.20 GHz, 32 GB RAM), while learning-based methods on NVIDIA TITAN Xp."
Software Dependencies | No | The paper mentions the "TensorFlow package" but does not specify its version number, nor versions of any other software dependencies.
Experiment Setup | Yes | "During training, we use Adam optimizer [Kingma and Ba, 2015] with a learning rate of 1e-4. We set the batch size as 1." (A minimal training-setup sketch follows this table.)
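
For reference, the reported 281/17/70 split can be expressed as a minimal Python sketch. The volume identifiers and the random seed below are hypothetical: the paper does not publish the per-subject assignment, so this only reproduces the reported counts.

```python
import random

# Hypothetical placeholder IDs for the 368 T1-weighted MR volumes pooled
# from ADNI, ABIDE, and OASIS; the actual file names are not published.
volume_ids = [f"volume_{i:03d}" for i in range(368)]

random.seed(0)  # assumed seed; the paper does not report one
random.shuffle(volume_ids)

train_ids = volume_ids[:281]
val_ids = volume_ids[281:298]  # 17 validation volumes
test_ids = volume_ids[298:]    # 70 test volumes

# Sanity check: counts match the 281/17/70 split stated in the paper.
assert (len(train_ids), len(val_ids), len(test_ids)) == (281, 17, 70)
```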
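Likewise, the reported training configuration (Adam optimizer, learning rate 1e-4, batch size 1, TensorFlow) can be sketched as below. This assumes a TensorFlow 2 / Keras setup; `registration_net`, its layer shapes, and the placeholder loss are illustrative stand-ins, not the paper's bi-level probabilistic feature-learning network, which is not released.

```python
import tensorflow as tf

# Hypothetical stand-in for the paper's registration network: it maps a
# concatenated (moving, fixed) volume pair to a toy 3-channel flow field.
registration_net = tf.keras.Sequential([
    tf.keras.layers.Conv3D(8, 3, padding="same", activation="relu",
                           input_shape=(32, 32, 32, 2)),  # assumed toy shape
    tf.keras.layers.Conv3D(3, 3, padding="same"),
])

# Learning rate 1e-4, as stated in the paper.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)

# Batch size 1, as stated in the paper; the input tensor and the loss
# below are placeholders, not the paper's registration objective.
moving_fixed_pair = tf.random.normal((1, 32, 32, 32, 2))

with tf.GradientTape() as tape:
    flow = registration_net(moving_fixed_pair, training=True)
    loss = tf.reduce_mean(tf.square(flow))  # placeholder loss
grads = tape.gradient(loss, registration_net.trainable_variables)
optimizer.apply_gradients(zip(grads, registration_net.trainable_variables))
```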