Fully Convolutional Network for Consistent Voxel-Wise Correspondence
Authors: Yungeng Zhang, Yuru Pei, Yuke Guo, Gengyu Ma, Tianmin Xu, Hongbin Zha (pp. 12935-12942)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both synthetic and clinically captured volumetric cone-beam CT (CBCT) images show that the proposed framework is effective and competitive against state-of-the-art deformable registration techniques. |
| Researcher Affiliation | Collaboration | 1 Key Laboratory of Machine Perception (MOE), Department of Machine Intelligence, Peking University, Beijing, China; 2 Luoyang Institute of Science and Technology, Luoyang, China; 3 uSens Inc., San Jose, USA; 4 School of Stomatology, Peking University, Beijing, China |
| Pseudocode | No | The paper describes the proposed method in text and diagrams, but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing open-source code or a link to a code repository for their method. |
| Open Datasets | No | The paper states: 'The training dataset consists of 400 clinically captured CBCT images from orthodontic patients...' and 'we generate a toy dataset with the ground-truth DVFs using synthetic data...'. However, no specific link, DOI, or formal citation to a publicly available version of these datasets is provided. |
| Dataset Splits | No | The paper states, 'The training dataset consists of 400 clinically captured CBCT images...' and 'For testing, we collect a toy dataset with 20 synthetic images and a real dataset with 20 clinically captured images.' It does not explicitly define a separate validation dataset split. |
| Hardware Specification | Yes | The framework is implemented using the open-source PyTorch implementation of convolutional neural networks on an NVIDIA GTX TITAN X GPU. |
| Software Dependencies | No | The paper mentions a 'PyTorch implementation' but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We train the network using the ADAM optimizer with a learning rate of 1e-4 and momentums of 0.5 and 0.999. The mini-batch contains three volumes. |
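The reported setup maps directly onto PyTorch's `Adam` API, where the two "momentums" correspond to the `betas` coefficients. A minimal sketch of the reported configuration follows; the paper's actual registration network is not specified here, so a placeholder 3D convolution stands in for it:

```python
import torch
from torch import nn

# Placeholder for the paper's fully convolutional registration network
# (the real architecture is not reproduced in this report).
model = nn.Conv3d(in_channels=1, out_channels=3, kernel_size=3, padding=1)

# ADAM optimizer with a learning rate of 1e-4 and momentums (betas)
# of 0.5 and 0.999, as reported in the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.5, 0.999))

# A mini-batch of three single-channel volumes (batch size 3, as reported);
# the 32^3 spatial size is an arbitrary choice for illustration.
batch = torch.randn(3, 1, 32, 32, 32)
output = model(batch)
```

This is a sketch under the stated assumptions, not the authors' implementation; the volume size, channel counts, and placeholder network are illustrative only.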