Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction
Authors: Hong Wang, Yuexiang Li, Deyu Meng, Yefeng Zheng
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments executed on synthetic and clinical datasets show the superiority of our ACDNet in terms of effectiveness and model generalization. |
| Researcher Affiliation | Collaboration | 1Tencent Jarvis Lab, Shenzhen, China 2Xi'an Jiaotong University, Xi'an, China 3Peng Cheng Laboratory, Shenzhen, China |
| Pseudocode | No | The paper describes the optimization algorithm and network unfolding process in detail, including equations, but it does not present this information in a formal pseudocode block or algorithm box. |
| Open Source Code | Yes | Code and supplementary material are available at https://github.com/hongwang01/ACDNet. |
| Open Datasets | Yes | We randomly choose 1,200 clean CT images from the public DeepLesion dataset [Yan et al., 2018] and collect 100 metal masks from [Zhang and Yu, 2018] to synthesize the paired clean/metal-corrupted CT images. Specifically, 90 metal masks together with 1,000 clean CT images are used for training, and the remaining 10 masks together with the remaining 200 clean CT images for testing. ... A public clinical dataset, CLINICmetal [Liu et al., 2021], which contains 14 metal-corrupted volumes with pixel-wise annotations of multiple bone structures (sacrum, left hip, right hip, and lumbar spine), is used for evaluation. |
| Dataset Splits | No | The paper specifies training and testing splits by count ('1000 clean CT images for training' and '200 clean CT images for testing') but does not explicitly define a separate validation dataset split for model tuning or early stopping. |
| Hardware Specification | Yes | The framework is trained on an NVIDIA Tesla V100-SMX2 GPU with a batch size of 32. |
| Software Dependencies | No | The paper states, 'ACDNet is optimized through Adam optimizer based on PyTorch.' While PyTorch is mentioned, no specific version number is provided, and no other software dependencies with version numbers are listed. |
| Experiment Setup | Yes | The initial learning rate is 2×10⁻⁴ and is divided by 2 at epochs [50, 100, 150, 200]. The total number of epochs is 300. The input image patches are 64×64 pixels and are randomly flipped horizontally and vertically. |
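The reported learning-rate schedule (start at 2×10⁻⁴, halve at each of epochs 50, 100, 150, and 200, for 300 epochs total) can be sketched as a small step-schedule helper. This is a minimal sketch for reproduction purposes; the function name and signature are illustrative, not taken from the paper or its code:

```python
def lr_at_epoch(epoch, base_lr=2e-4, milestones=(50, 100, 150, 200), gamma=0.5):
    """Return the learning rate in effect at a given epoch under a step
    schedule: the rate is multiplied by `gamma` at each milestone epoch."""
    return base_lr * gamma ** sum(epoch >= m for m in milestones)

# Before the first milestone the base rate applies; after all four
# halvings the rate is 2e-4 / 16 = 1.25e-5.
print(lr_at_epoch(0))    # 0.0002
print(lr_at_epoch(299))  # 1.25e-05
```

In PyTorch (the framework named in the paper), the equivalent schedule would typically be expressed with `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50, 100, 150, 200], gamma=0.5)` attached to an Adam optimizer with `lr=2e-4`.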