Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

GSURE-Based Diffusion Model Training with Corrupted Data

Authors: Bahjat Kawar, Noam Elata, Tomer Michaeli, Michael Elad

TMLR 2024

Each entry below lists a reproducibility variable, its result, and the LLM response used as evidence.

Research Type: Experimental
LLM Response: "To empirically evaluate our technique, we apply it on a downsized grayscale version of CelebA (Liu et al., 2015), a dataset of celebrity face images. We train a GSURE-Diffusion model on noisy images with randomly missing patches, and compare its generative output with an oracle model that trained on the full clean images."

Researcher Affiliation: Academia
LLM Response: "Bahjat Kawar (EMAIL), Department of Computer Science; Noam Elata (EMAIL), Department of Electrical and Computer Engineering; Tomer Michaeli (EMAIL), Department of Electrical and Computer Engineering; Michael Elad (EMAIL), Department of Computer Science. Technion - Israel Institute of Technology, Haifa, Israel."

Pseudocode: No
LLM Response: "The paper describes mathematical formulations and theoretical propositions but does not include any clearly labeled pseudocode or algorithm blocks. It refers to 'pragmatic implementation details in Appendix D', but these are not formatted as pseudocode."

Open Source Code: Yes
LLM Response: "Our code is available at https://github.com/bahjat-kawar/gsure-diffusion."

Open Datasets: Yes
LLM Response: "To empirically evaluate GSURE-Diffusion, we apply it on 32×32-pixel grayscale face images from CelebA (Liu et al., 2015). [...] We train on 24,853 scanned slices from the fastMRI (Knoll et al., 2020; Zbontar et al., 2019) single-coil knee MRI dataset."

Dataset Splits: Yes
LLM Response: "We train a GSURE-Diffusion model on noisy images with randomly missing patches, and compare its generative output with an oracle model that trained on the full clean images. [...] We measure the generative performance using the Fréchet Inception Distance (FID) (Heusel et al., 2017) between 10,000 generated images and the CelebA validation set. [...] We train a diffusion model on the corrupted measurements. A separate oracle model is trained with the same hyperparameters (detailed in Appendix C) on the fully sampled data. [...] To evaluate the validity of our approach, we measure the mean squared error (MSE) of both models in denoising 1024 fully sampled MR images from the fastMRI (Knoll et al., 2020; Zbontar et al., 2019) validation set for different diffusion timesteps."

Hardware Specification: No
LLM Response: "The paper does not provide any specific details about the hardware used for running the experiments, such as GPU or CPU models."

Software Dependencies: No
LLM Response: "The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments."

Experiment Setup: No
LLM Response: "Training hyperparameters and more details are provided in Appendix C."
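As context for the evidence quoted above, the corruption model the paper trains on (noisy images with randomly missing patches) can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the patch size, patch count, and noise level below are assumed values chosen for the example, and `corrupt` is a hypothetical helper name.

```python
import numpy as np

def corrupt(images, patch_size=8, n_patches=4, noise_sigma=0.1, rng=None):
    """Sketch of a missing-patches-plus-noise corruption.

    `images` has shape (N, H, W) with values in [0, 1]. For each image,
    a few square patches are zeroed out (missing pixels), then white
    Gaussian noise is added everywhere. All parameters are illustrative.
    Returns the corrupted measurements and the binary masks.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, h, w = images.shape
    masks = np.ones_like(images)
    for i in range(n):
        for _ in range(n_patches):
            # Pick a random top-left corner and zero out the patch.
            r = rng.integers(0, h - patch_size + 1)
            c = rng.integers(0, w - patch_size + 1)
            masks[i, r:r + patch_size, c:c + patch_size] = 0.0
    # Masked (missing) pixels plus additive white Gaussian noise.
    y = masks * images + noise_sigma * rng.standard_normal(images.shape)
    return y, masks

# Example on dummy 32x32 "images", matching the CelebA resolution quoted above.
x = np.ones((2, 32, 32))
y, m = corrupt(x, rng=np.random.default_rng(0))
```

In the paper's setting, a diffusion model is then trained directly on measurements like `y` (with the masks known), while the oracle baseline trains on the clean `x`.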