Monte Carlo guided Denoising Diffusion models for Bayesian linear inverse problems.

Authors: Gabriel Cardoso, Yazid Janati el idrissi, Sylvain Le Corff, Eric Moulines

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "we provide numerical simulations showing that it outperforms competing baselines when dealing with ill-posed inverse problems in a Bayesian setting."
Researcher Affiliation | Academia | Gabriel Cardoso* (Ecole polytechnique, IHU Liryc), Yazid Janati* (Ecole polytechnique), Sylvain Le Corff (Sorbonne Université), Eric Moulines (Ecole polytechnique)
Pseudocode | Yes | "Algorithm 1: MCGdiff (σ = 0)" (an illustrative sketch of such a guided sampling loop appears after the table)
Open Source Code | Yes | "The code for the experiments is available at https://github.com/gabrielvc/mcg_diff."
Open Datasets | Yes | "Image datasets. Figure 3 shows samples of MCGdiff in different datasets (Celeb, Churches, Bedroom and Flowers)... We use a downsampling ratio of 4 for the CIFAR-10 dataset, 8 for both Flowers and Cats datasets and 16 for the others. The dimension of the datasets are recalled in table 4." Table 4 lists: CIFAR-10, Flowers, Cats, Bedroom, Church, Celeba HQ. (An illustrative construction of such a downsampling operator appears after the table.)
Dataset Splits | No | The paper trains on and evaluates with the datasets above but does not provide explicit training/validation/test splits (no percentages, sample counts, or references to predefined splits).
Hardware Specification | No | The paper does not report the hardware used for its experiments (no GPU/CPU models, processor types, or memory amounts).
Software Dependencies | No | The paper mentions the Adam algorithm, a normalizing flow, and automatic differentiation libraries, but does not give version numbers for any software component or library.
Experiment Setup | Yes | The κ parameter of MCGdiff is set to κ² = 10⁻⁴. 20 DDIM steps are used for the numerical examples, for all algorithms. The sequence {βs}, s = 1, …, 1000, decreases linearly from β₁ = 0.2 to β₁₀₀₀ = 10⁻⁴. The variational-inference training uses the Adam algorithm with a learning rate of 10⁻³ for 200 iterations, with N_nf = 10. (These settings are transcribed in the schedule sketch after the table.)
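
For readers skimming the Pseudocode row: below is a minimal sketch of the kind of sequential-Monte-Carlo-guided DDIM loop that Algorithm 1 (MCGdiff, σ = 0) describes. The `denoiser` placeholder, the Gaussian potential of width κ², and plain multinomial resampling are illustrative assumptions, not the paper's exact construction; the actual algorithm works in the SVD basis of the forward operator and treats observed and unobserved coordinates differently.

```python
# Sketch of an SMC-guided reverse-diffusion loop in the spirit of MCGdiff.
# All names here (smc_guided_ddim, denoiser) are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def denoiser(x, t):
    # Stand-in for the pretrained noise-prediction network eps_theta(x, t).
    return np.zeros_like(x)

def smc_guided_ddim(y, A, alphas_cumprod, n_particles=64, kappa2=1e-4):
    """Illustrative SMC-guided DDIM sampler for y ≈ A x (noiseless case).

    At each step, particles are reweighted by how well their predicted
    clean image explains the observation y, then resampled.
    """
    d = A.shape[1]
    T = len(alphas_cumprod)
    particles = rng.standard_normal((n_particles, d))  # x_T ~ N(0, I)

    for t in range(T - 1, 0, -1):
        a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
        eps = np.stack([denoiser(p, t) for p in particles])
        # DDIM prediction of x_0 for each particle.
        x0_hat = (particles - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        # Importance weights: Gaussian potential around the observation.
        resid = y - x0_hat @ A.T
        logw = -0.5 * np.sum(resid**2, axis=1) / kappa2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Multinomial resampling, then a deterministic DDIM move.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x0_hat, eps = x0_hat[idx], eps[idx]
        particles = np.sqrt(a_prev) * x0_hat + np.sqrt(1.0 - a_prev) * eps
    return particles
```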
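
The downsampling ratios quoted in the Open Datasets row (4, 8, or 16) correspond to a linear forward operator A. As a rough illustration only, one common choice is r × r block averaging; the function name and the averaging kernel below are assumptions, and the paper's exact degradation operator may differ.

```python
# Hypothetical super-resolution operator: r x r block averaging of a
# flattened (h, w) grayscale image, e.g. avg_pool_operator(32, 32, 4)
# for CIFAR-10 at downsampling ratio 4.
import numpy as np

def avg_pool_operator(h, w, r):
    """Matrix A mapping a flattened (h, w) image to its (h//r, w//r) average pool."""
    A = np.zeros((h // r * (w // r), h * w))
    for i in range(h // r):
        for j in range(w // r):
            row = i * (w // r) + j
            for di in range(r):
                for dj in range(r):
                    A[row, (i * r + di) * w + (j * r + dj)] = 1.0 / r**2
    return A
```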
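
The Experiment Setup row translates directly into code. Below is a short transcription of those settings; the choice of evenly spaced DDIM timesteps is an assumption, since the paper only states that 20 DDIM steps are used.

```python
# Schedule from the Experiment Setup row: 1000 betas decreasing linearly
# from 0.2 to 1e-4, a 20-step DDIM subsequence, and kappa^2 = 1e-4.
import numpy as np

T = 1000
betas = np.linspace(0.2, 1e-4, T)          # beta_1 = 0.2, beta_1000 = 1e-4
alphas_cumprod = np.cumprod(1.0 - betas)   # standard DDPM cumulative product
ddim_steps = np.linspace(0, T - 1, 20, dtype=int)  # assumed evenly spaced
kappa2 = 1e-4                              # MCGdiff kappa^2 from the paper
```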