Sparse Bayesian Deep Learning for Cross Domain Medical Image Reconstruction

Authors: Jiaxin Huang, Qi Wu, Yazhou Ren, Fan Yang, Aodi Yang, Qianqian Yang, Xiaorong Pu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "When evaluated on medical image reconstruction tasks, our proposed approach demonstrates impressive performance across various previously unseen domains." Supporting material includes the Experiments section ("Cross Domain LDCT Image Denoising", "Dataset...") and an Ablation Study: "To evaluate the effectiveness of our dynamic optimization approach, we benchmark it against five baseline Bayesian inference techniques."
Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China; (2) Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen, China
Pseudocode | Yes | The paper provides Algorithm 1: Dynamic Prior Optimization with SGLD (a generic SGLD update sketch appears after this table).
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "For this application, we train our proposed method on a phantom dataset (Phantom)... The first testing dataset AAPM... (McCollough 2016)... the Low-Dose Parallel Beam CT dataset (LoDoPaB-CT) (Leuschner et al. 2021) and the Low Dose CT Image and Projection Data (LDCT-and-Projection-data) (Moen et al. 2021)... the MRNet dataset (Bien et al. 2018) and the ADNI dataset (Jack Jr et al. 2008)."
Dataset Splits | No | The paper specifies training and testing datasets and splits, but does not explicitly mention or detail a separate validation split.
Hardware Specification | No | The paper notes that "the server can afford" the required memory and reports computational cost in terms of MACs and memory, but it does not specify any particular hardware, such as GPU models, CPU models, or memory amounts used for the experiments.
Software Dependencies | No | The paper mentions 'PyTorch-OpCounter' and 'Pytorch-Memory-Utils' for computing its MACs and memory metrics (a usage sketch follows the table), but it does not provide version numbers for these or any other software dependencies.
Experiment Setup | Yes | "The convolutional layers used in the encoder are of kernel size 3, padding size 1, and stride 1... In all experiments, we randomly crop patches with a batch size of 16 for training. Hyperparameters are set to λ_align = 10, λ_rec = 0.013, and λ_per = 10. The number of SGLD iterations T_sgld for the Phantom data and the number of adaptation steps T_adapt are set to 8. The learning rate was initially set to 0.001 and reduced to 0.0001 when the training errors held steady." (A configuration sketch follows the table.)
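
The paper's Algorithm 1 ("Dynamic Prior Optimization with SGLD") is not reproduced in this report. As a point of reference, the following is a minimal sketch of a standard Stochastic Gradient Langevin Dynamics update in PyTorch; the function names `sgld_step` and `neg_log_posterior` and the toy potential are illustrative assumptions, not the authors' code, and the paper's prior-optimization loop may differ.

```python
import torch

def sgld_step(params, neg_log_posterior, lr):
    """One SGLD update: theta <- theta - (lr/2) * grad U(theta) + sqrt(lr) * N(0, I),
    where U is the negative log posterior. Illustrative sketch only."""
    loss = neg_log_posterior(params)
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            # Gradient half-step plus Gaussian exploration noise.
            p.add_(-0.5 * lr * g + lr ** 0.5 * torch.randn_like(p))
    return loss.item()

# Illustrative use with a toy quadratic potential; T_sgld = 8 as reported.
theta = torch.zeros(3, requires_grad=True)
U = lambda ps: 0.5 * (ps[0] ** 2).sum()
for _ in range(8):
    sgld_step([theta], U, lr=1e-3)
```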
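For the profiling tools named above, PyTorch-OpCounter (published on PyPI as `thop`) reports MACs and parameter counts as shown below; a minimal usage sketch, assuming an arbitrary single-layer model and a dummy input shape, since the paper does not state which shapes were profiled.

```python
import torch
from thop import profile  # PyTorch-OpCounter: pip install thop

# Illustrative model and input; the actual network and patch size are not restated here.
model = torch.nn.Conv2d(1, 64, kernel_size=3, padding=1, stride=1)
dummy = torch.randn(1, 1, 64, 64)

macs, params = profile(model, inputs=(dummy,))
print(f"MACs: {macs:.0f}, Params: {params:.0f}")
```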
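Finally, a minimal configuration sketch of the reported experiment setup in PyTorch. The kernel/padding/stride values, batch size, λ weights, T_sgld = T_adapt = 8, and the 0.001 → 0.0001 schedule come from the paper; the optimizer choice (Adam), channel counts, and plateau-based scheduler are assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Encoder conv layers as reported: kernel size 3, padding 1, stride 1.
# Channel widths (1 -> 64 -> 64) are assumed, not stated in this report.
encoder = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=3, padding=1, stride=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1, stride=1),
)

# Reported loss weights and iteration counts.
LAMBDA_ALIGN, LAMBDA_REC, LAMBDA_PER = 10.0, 0.013, 10.0
T_SGLD = T_ADAPT = 8
BATCH_SIZE = 16  # randomly cropped patches

def total_loss(l_align, l_rec, l_per):
    # Weighted sum of the alignment, reconstruction, and perceptual terms.
    return LAMBDA_ALIGN * l_align + LAMBDA_REC * l_rec + LAMBDA_PER * l_per

# Assumption: Adam; the paper states only the 0.001 -> 0.0001 schedule.
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
# One plateau-triggered drop (factor 0.1) reproduces 0.001 -> 0.0001;
# call scheduler.step(epoch_loss) once per epoch.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.1, patience=10)
```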