Stochastic Planner-Actor-Critic for Unsupervised Deformable Image Registration

Authors: Ziwei Luo, Jing Hu, Xin Wang, Shu Hu, Bin Kong, Youbing Yin, Qi Song, Xi Wu, Siwei Lyu

AAAI 2022, pp. 1917-1925 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our method on several 2D and 3D medical image datasets, some of which contain large deformations. Our empirical results highlight that our work achieves consistent, significant gains and outperforms state-of-the-art methods.
Researcher Affiliation | Collaboration | 1. Chengdu University of Information Technology, China; 2. Keya Medical, Seattle, USA; 3. University at Buffalo, SUNY, USA
Pseudocode | Yes | Algorithm 1: Stochastic Planner-Actor-Critic (see the illustrative sketch after this table)
Open Source Code | No | The paper does not provide any explicit statement or link for the open-source code of the described methodology.
Open Datasets | Yes | MNIST (LeCun et al. 1998) is regarded as a standard sanity check for the proposed registration method. The 2D brain MRI training dataset consists of 2302 pre-processed 2D scans from ADNI (Mueller et al. 2005), ABIDE (Di Martino et al. 2014), and ADHD (Bellec et al. 2017). For 3D registration, we use the Liver Tumor Segmentation (LiTS) (Bilic et al. 2019) challenge data for training, which contains 131 CT scans with segmentation ground truth manually annotated by experts.
Dataset Splits | No | The paper uses different datasets for training (ADNI, ABIDE, ADHD, LiTS) and for evaluation/testing (LPBA, SLIVER, LSPIG), but it does not specify explicit training/validation/test splits (e.g., percentages or counts from a single dataset) needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not explicitly state the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper does not list specific software dependencies or their version numbers (e.g., Python, PyTorch, or TensorFlow versions) required for replication.
Experiment Setup | No | While the paper describes the overall framework and loss functions, it does not explicitly provide experimental setup details such as concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings.
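Since the paper's pseudocode (Algorithm 1) and hyperparameters are not reproduced in this summary, the following is only a minimal, hypothetical PyTorch sketch of an unsupervised actor-critic registration update in the spirit of the method's title. The network definitions, the MSE-based reward, the smoothness weight, and all names (Actor, Critic, warp, train_step) are assumptions for illustration; the paper's planner component and its exact losses are not modeled here.

```python
# Hypothetical sketch only: NOT the paper's Algorithm 1 or its hyperparameters.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Actor(nn.Module):
    """Predicts a dense 2D displacement field for a (moving, fixed) image pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),  # 2 output channels: (dx, dy)
        )

    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))


class Critic(nn.Module):
    """Scores how well a warped moving image matches the fixed image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, warped, fixed):
        return self.net(torch.cat([warped, fixed], dim=1))


def warp(image, flow):
    """Bilinearly resample `image` (B, 1, H, W) with a pixel displacement field `flow` (B, 2, H, W)."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2).to(image)
    # Convert pixel displacements to normalized grid offsets for grid_sample.
    offset = torch.stack([flow[:, 0] * 2 / max(w - 1, 1),
                          flow[:, 1] * 2 / max(h - 1, 1)], dim=-1)
    return F.grid_sample(image, base + offset, align_corners=True)


def train_step(actor, critic, opt_a, opt_c, moving, fixed, smooth_weight=0.1):
    """One unsupervised update: image-similarity signal only, no ground-truth flow."""
    flow = actor(moving, fixed)
    warped = warp(moving, flow)

    # Critic regresses the negative MSE of the (detached) warp as a learned value.
    value = critic(warped.detach(), fixed)
    target = -F.mse_loss(warped.detach(), fixed).expand_as(value)
    critic_loss = F.mse_loss(value, target)
    opt_c.zero_grad()
    critic_loss.backward()
    opt_c.step()

    # Actor maximizes the critic's score plus a first-order smoothness penalty on the flow.
    actor_loss = -critic(warp(moving, flow), fixed).mean() \
        + smooth_weight * (flow.diff(dim=-1).abs().mean()
                           + flow.diff(dim=-2).abs().mean())
    opt_a.zero_grad()
    actor_loss.backward()
    opt_a.step()
    return actor_loss.item(), critic_loss.item()
```

As a quick sanity check (loosely analogous in spirit to the paper's MNIST experiment, with illustrative learning rates), the step can be driven with random 28x28 tensors:

```python
actor, critic = Actor(), Critic()
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)  # illustrative value
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-4)  # illustrative value
moving, fixed = torch.rand(4, 1, 28, 28), torch.rand(4, 1, 28, 28)
print(train_step(actor, critic, opt_a, opt_c, moving, fixed))
```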