Dilated FCN for Multi-Agent 2D/3D Medical Image Registration

Authors: Shun Miao, Sebastien Piat, Peter Fischer, Ahmet Tuysuzoglu, Philip Mewes, Tommaso Mansi, Rui Liao

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | From the "Experiment and Result" section: "Testing was first performed on 116 CBCT data sets via three-fold cross validation (77 used for training and 39 used for testing). The typical size of the CBCT data is 512 × 512 × 389 with a pixel spacing of 0.486 mm. On each data set, 10 pairs of X-ray images that are >60° apart (common practice for spine surgery) were randomly selected, and 2D/3D registration was performed on each pair, starting from a perturbation of the ground truth transformation within 20 mm translation and 10° rotation, leading to 1,160 test cases. Note that X-ray images in CBCT data have a relatively low SNR with a faint spine as shown in Fig. 1. Experiment results are summarized in Table 2." (A sketch of this perturbed starting-pose setup appears after the table.)
Researcher Affiliation | Industry | 1. Siemens Healthineers, Medical Imaging Technologies, Princeton, NJ, USA; 2. Siemens Healthineers, Forchheim, Germany
Pseudocode | No | The paper describes the proposed methods in detail and illustrates the network architectures with figures, but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access to source code for the described methodology, nor does it explicitly state that the code is available.
Open Datasets | No | "The training data was generated from 77 CBCT data sets... Since the number of CBCTs is limited, we also generated pairs of synthetic X-ray image and DRR from 160 CTs as additional training data..." The paper describes the data generation process but does not provide any concrete access information (links, DOIs, repository names, or formal citations) for these datasets. (A toy DRR generation sketch appears after the table.)
Dataset Splits | No | The paper mentions "three-fold cross validation (77 used for training and 39 used for testing)" and analysis "on validation data", but does not explicitly provide specific percentages, sample counts, or a clear partitioning methodology for a dedicated validation dataset split that would support reproduction.
Hardware Specification | Yes | "Training was performed on an Nvidia Titan Xp GPU using PyTorch."
Software Dependencies | No | The paper mentions "PyTorch" as the software used for training, but does not specify a version number or list other software dependencies with their versions.
Experiment Setup | Yes | "Specifically, the action space contains 12 actions of positive and negative movements along the 6 generators of se(3): A = {λ1G1, −λ1G1, . . . , λ6G6, −λ6G6} (Eq. 5), where λi is the step size for the action along the generator Gi. ... we set λ1,2,3 to be 1 to get a step size of 1 mm in translation, and λ4,5,6 to be π/180 = 0.0174 to get a step size of 1 degree in rotation. ... we run the agent for a fixed number of steps (i.e., 50 in our experiments). ... The input of the network is an observation of the current state, which consists of an observed region of fixed size (i.e., 61 × 61 pixels with 1.5 × 1.5 mm pixel spacing in our experiment) ... we selected a confidence threshold (i.e., 0.67 in our experiment) such that the correct rate of selected actions is above 95%. To avoid the scenario that too few agents are selected for a given test image, if less than 10% of the agents have a confidence score above this threshold, the top 10% agents will then be selected." (A sketch of this action space and confidence-based agent selection follows the table.)
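
The evaluation protocol quoted in the Research Type row starts each registration from a random perturbation of the ground-truth transformation within 20 mm translation and 10° rotation. The snippet below is a minimal sketch of that setup, assuming a rigid pose parameterized as (tx, ty, tz, rx, ry, rz) with a ZYX Euler convention; the function names and the composition order are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch (not the authors' code): drawing a perturbed starting pose
# within 20 mm translation and 10 degrees rotation of the ground truth.
import numpy as np

def random_perturbation(max_trans_mm=20.0, max_rot_deg=10.0, rng=None):
    """Draw a rigid perturbation (tx, ty, tz, rx, ry, rz) within the stated bounds."""
    rng = rng or np.random.default_rng()
    trans = rng.uniform(-max_trans_mm, max_trans_mm, size=3)          # mm
    rot = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg, size=3))  # radians
    return np.concatenate([trans, rot])

def to_matrix(params):
    """Convert (tx, ty, tz, rx, ry, rz) to a 4x4 homogeneous transform (ZYX Euler)."""
    tx, ty, tz, rx, ry, rz = params
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    R = (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]) @
         np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @
         np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = [tx, ty, tz]
    return T

# Starting pose for one test case: ground-truth pose composed with a perturbation.
T_gt = np.eye(4)                                   # placeholder ground-truth pose
T_start = to_matrix(random_perturbation()) @ T_gt
```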
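
The Open Datasets row quotes the generation of synthetic X-ray / DRR training pairs from CT volumes. As a rough illustration only, the toy function below integrates intensity along one axis of a volume (an orthographic projection); the paper's actual DRRs would be produced by perspective ray casting through the CT with the C-arm imaging geometry, which this sketch does not model.

```python
# Toy orthographic "DRR": sum attenuation along one axis and normalize to [0, 1].
# This is a simplification for illustration, not the paper's DRR pipeline.
import numpy as np

def toy_drr(volume, axis=0):
    proj = volume.sum(axis=axis).astype(np.float64)
    proj -= proj.min()
    if proj.max() > 0:
        proj /= proj.max()
    return proj

ct = np.random.rand(389, 512, 512)   # stand-in for a CT/CBCT volume
drr = toy_drr(ct, axis=0)            # 512 x 512 projection image
```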
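
The Experiment Setup row describes 12 discrete actions (positive and negative unit steps along the six se(3) generators, 1 mm or 1°), a fixed 50-step run, and a 0.67 confidence threshold with a top-10% fallback. The sketch below encodes those numbers directly; the pose representation as six se(3) coefficients, the helper names, and the simplified additive update are assumptions made for illustration, since the paper provides no code.

```python
# Hypothetical sketch (not the authors' implementation) of the action space and
# confidence-based agent selection quoted in the Experiment Setup row.
import numpy as np

STEP_TRANS = 1.0                 # lambda_1..3: 1 mm translation steps
STEP_ROT = np.pi / 180.0         # lambda_4..6: 1 degree rotation steps
STEP_SIZES = np.array([STEP_TRANS] * 3 + [STEP_ROT] * 3)

# 12 actions: (+/-) steps along each of the 6 generators of se(3).
ACTIONS = [(g, s) for g in range(6) for s in (+1.0, -1.0)]

def apply_action(pose_se3, action_idx):
    """Apply one discrete action to a pose given as 6 se(3) coefficients.
    Simplified: adds the step to the coefficient rather than composing via the
    exponential map."""
    g, s = ACTIONS[action_idx]
    pose = pose_se3.copy()
    pose[g] += s * STEP_SIZES[g]
    return pose

def select_agents(confidences, threshold=0.67, min_fraction=0.10):
    """Keep agents above the confidence threshold; fall back to the top 10% if too few pass."""
    confidences = np.asarray(confidences, dtype=float)
    n_min = max(1, int(np.ceil(min_fraction * len(confidences))))
    keep = confidences >= threshold
    if keep.sum() < n_min:
        keep = np.zeros(len(confidences), dtype=bool)
        keep[np.argsort(confidences)[-n_min:]] = True
    return keep
```

In the paper, each agent observes a local 61 × 61 region (1.5 × 1.5 mm spacing) and the selected agents drive the pose update at each of the 50 fixed steps; the sketch above covers only the action bookkeeping and the selection rule, not the network or the aggregation of the agents' decisions.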