An Artificial Agent for Robust Image Registration
Authors: Rui Liao, Shun Miao, Pierre de Tournemire, Sasa Grbic, Ali Kamen, Tommaso Mansi, Dorin Comaniciu
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate, on two 3-D/3-D medical image registration examples with drastically different natures of challenges, that the artificial agent outperforms several state-of-the-art registration methods by a large margin in terms of both accuracy and robustness. |
| Researcher Affiliation | Industry | Technology Center, Medical Imaging Technologies Siemens Medical Solutions USA Princeton, NJ 08540 |
| Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide any specific links or statements regarding the open-sourcing of the code for the described methodology. A disclaimer states: 'This feature is based on research, and is not commercially available. Due to regulatory reasons its future availability cannot be guaranteed.' |
| Open Datasets | No | The paper uses custom medical image datasets: 'Abdominal spine CT and CBCT' and 'Cardiac CT and CBCT', which were processed by experts. No concrete access information (links, DOIs, or citations to public repositories) for these datasets is provided. |
| Dataset Splits | Yes | Cross-validations were furthermore performed with 5 different blind data-splits for both E1 and E2 (validation for one data-split took 4 days on a 24-core + GeForce Titan X computer for data augmentation and training). For each data-split, there were 82 pairs for training and 5 pairs for testing for E1, and 92 pairs for training and 5 pairs for testing for E2. |
| Hardware Specification | Yes | Validation for one data-split took 4 days on a 24-core + GeForce Titan X computer for data augmentation and training. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., programming language versions, deep learning framework versions, or library versions) used for the experiments. |
| Experiment Setup | Yes | We used the RMSprop update without momentum and a batch size of 32. The learning rate was 0.00006 with a decay of 0.7 every 10000 mini-batch-based back-propagations. The network consists of 5 convolutional layers followed by 3 fully connected layers. The convolutional layers use 8, 32, 32, 128, 128 filters, all with 3x3x3 kernels. The first 2 convolutional layers are each followed by a 2x2x2 max-pooling layer. The 3 fully connected layers have 512, 512, 64 activation neurons, and the output has 12 nodes corresponding to the 12 possible actions in A. Each layer is followed by a rectified linear (ReLU) nonlinearity, and batch normalization is applied to each layer. |
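The architecture and training hyperparameters quoted above are concrete enough to sketch. Below is a minimal, non-authoritative PyTorch reconstruction of the described Q-network: 5 convolutional layers (8, 32, 32, 128, 128 filters, 3x3x3 kernels), 2x2x2 max-pooling after each of the first two, fully connected layers of 512, 512, 64 units, a 12-way output, ReLU and batch normalization throughout, and RMSprop without momentum at lr 0.00006 with a 0.7 decay every 10000 mini-batches. The number of input channels, the input patch size, and the use of same-padding are assumptions not stated in the paper.

```python
import torch
import torch.nn as nn


class RegistrationAgent(nn.Module):
    """Sketch of the paper's Q-network. Input channels, patch size,
    and convolution padding are assumptions for illustration only."""

    def __init__(self, in_channels=2, input_size=16):
        super().__init__()
        chans = [in_channels, 8, 32, 32, 128, 128]
        layers = []
        for i in range(5):
            layers += [nn.Conv3d(chans[i], chans[i + 1], 3, padding=1),
                       nn.BatchNorm3d(chans[i + 1]),
                       nn.ReLU()]
            if i < 2:  # first two conv layers each followed by 2x2x2 max-pooling
                layers.append(nn.MaxPool3d(2))
        self.features = nn.Sequential(*layers)
        side = input_size // 4  # two poolings halve each spatial dimension twice
        self.head = nn.Sequential(
            nn.Linear(128 * side ** 3, 512), nn.BatchNorm1d(512), nn.ReLU(),
            nn.Linear(512, 512), nn.BatchNorm1d(512), nn.ReLU(),
            nn.Linear(512, 64), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Linear(64, 12),  # Q-values for the 12 actions in A
        )

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


model = RegistrationAgent()
# RMSprop without momentum, lr 0.00006, decayed by 0.7 every 10000 mini-batches
optimizer = torch.optim.RMSprop(model.parameters(), lr=6e-5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10000, gamma=0.7)
```

A two-channel input (fixed and moving volume stacked) is one common convention for pairwise registration networks; the paper does not specify how the image pair is fed to the network, so this detail should be treated as a placeholder.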