Solving Inverse Problems via Diffusion Optimal Control

Authors: Henry Li, Marcus Pereira

NeurIPS 2024

Reproducibility assessment. Each entry lists the variable, the result, and the supporting LLM response quoted from the paper.
Research Type: Experimental
"We then evaluate our method against a selection of neural inverse problem solvers, and establish a new baseline in image reconstruction with inverse problems."
Researcher Affiliation: Collaboration
Henry Li (Yale University, henry.li@yale.edu) and Marcus Pereira (Bosch Center for Artificial Intelligence, marcus.pereira@us.bosch.com)
Pseudocode: Yes
Algorithm 1: Diffusion Optimal Control
Input: λ, T, y, x_T
Initialize u_t, k_t, K_t ← 0 for t = 1, …, T, and {x̄_t}_{t=0}^{T} as the uncontrolled dynamics
for iter = 1 to num_iters do
    V_x, V_xx ← ∇_{x_0} log p(y|x_0), ∇²_{x_0} log p(y|x_0)    ▷ initialize derivatives of V(x_t, t)
    for t = 1 to T do
        compute k_t, K_t, V_x, V_xx    ▷ see Eqs. (13), (14)
    end for
    for t = T to 1 do
        x_{t−1} ← h(x_t, λ k_t + K_t (x_t − x̄_t))    ▷ update x_{t−1} with the new u_t
        x̄_t ← x_t
    end for
end for
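To make the control loop concrete, below is a minimal, runnable PyTorch sketch of the Algorithm 1 structure. It is not the paper's implementation: the one-step dynamics h, the Gaussian measurement model, the quadratic control penalty gamma, and the gain recursion standing in for Eqs. (13) and (14) are all assumptions (a generic LQR-style backward pass), since the excerpt does not reproduce those equations.

# Minimal sketch of the Algorithm 1 control loop. The dynamics h, the
# likelihood, and the gain equations below are ASSUMPTIONS, not the paper's.
import torch

d, T, lam, gamma, sigma = 4, 10, 1.0, 1e-2, 0.1
A = torch.eye(d)                       # hypothetical linear measurement operator
y = torch.randn(d)                     # observation

def h(x, u, t):
    # Stand-in one-step reverse dynamics x_{t-1} = h(x_t, u_t); the paper's
    # denoiser-based transition is not shown in the excerpt.
    return 0.95 * x + u

def terminal_derivs(x0):
    # Assumed Gaussian likelihood: log p(y|x0) = -||A x0 - y||^2 / (2 sigma^2).
    Vx = -A.T @ (A @ x0 - y) / sigma**2
    Vxx = -(A.T @ A) / sigma**2
    return Vx, Vxx

# Uncontrolled rollout {x̄_t}, t = T..0
xbar = [None] * (T + 1)
xbar[T] = torch.randn(d)
for t in range(T, 0, -1):
    xbar[t - 1] = h(xbar[t], torch.zeros(d), t)

k = [torch.zeros(d) for _ in range(T + 1)]
K = [torch.zeros(d, d) for _ in range(T + 1)]

for it in range(5):
    # Backward pass (t = 1..T): propagate value derivatives from x_0 toward x_T.
    Vx, Vxx = terminal_derivs(xbar[0])
    F, B = 0.95 * torch.eye(d), torch.eye(d)   # Jacobians of the stand-in h
    for t in range(1, T + 1):
        Qx, Qu = F.T @ Vx, B.T @ Vx
        Qxx, Quu, Qux = F.T @ Vxx @ F, B.T @ Vxx @ B - gamma * torch.eye(d), B.T @ Vxx @ F
        k[t] = -torch.linalg.solve(Quu, Qu)    # stand-in for Eq. (13)
        K[t] = -torch.linalg.solve(Quu, Qux)   # stand-in for Eq. (14)
        Vx = Qx + K[t].T @ Qu + Qux.T @ k[t] + K[t].T @ Quu @ k[t]
        Vxx = Qxx + K[t].T @ Qux + Qux.T @ K[t] + K[t].T @ Quu @ K[t]
    # Forward pass (t = T..1): roll out controlled dynamics, refreshing x̄ in place.
    x = xbar[T]
    for t in range(T, 0, -1):
        u = lam * k[t] + K[t] @ (x - xbar[t])
        xbar[t] = x
        x = h(x, u, t)
    xbar[0] = x

The two passes mirror the pseudocode: value derivatives propagate backward from x_0, and the controlled trajectory is re-rolled forward while x̄_t is overwritten with the new states.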
Open Source Code: No
The authors will release code upon acceptance.
Open Datasets: Yes
"We validate our results on the high-resolution human face dataset FFHQ 256×256 Karras et al. [2019]."
Dataset Splits: Yes
"To fairly compare between all models, all methods use the model weights from Chung et al. [2023a], which are trained on 49K FFHQ images, with 1K images left as a held-out set for evaluation."
Hardware Specification: Yes
"Experiments can be run on any A4000 GPU or later."
Software Dependencies: No
The paper mentions the "Adam optimizer Kingma and Ba [2014]" and "standard automatic differentiation libraries (e.g. torch.func.vjp)", but it does not provide version numbers for these software dependencies; only the underlying Adam algorithm is cited.
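For reference, a gradient such as ∇_{x_0} log p(y|x_0) can be computed with torch.func.vjp, which is part of core PyTorch from version 2.0 (previously shipped as functorch). The log_likelihood below is a hypothetical Gaussian stand-in, not the paper's measurement model:

# Hedged example of computing a log-likelihood gradient via torch.func.vjp.
import torch
from torch.func import vjp

y = torch.randn(4)      # hypothetical observation
sigma = 0.1

def log_likelihood(x0):
    # Assumed Gaussian model: log p(y|x0) up to an additive constant.
    return -0.5 * ((x0 - y) ** 2).sum() / sigma**2

x0 = torch.randn(4)
out, vjp_fn = vjp(log_likelihood, x0)
(grad_x0,) = vjp_fn(torch.ones_like(out))   # cotangent of a scalar output is 1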
Experiment Setup: Yes
"Further hyperparameters can be found in Table 2. For the classifier p(y|x) in MNIST class-guided generation, we use a simple convolutional neural network with two convolutional layers and two MLP layers, trained on the entire MNIST dataset."
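For concreteness, a classifier matching that description might look like the sketch below; the channel widths, kernel sizes, pooling, and hidden size are assumptions, since the excerpt only fixes the layer counts.

# Sketch of a "two conv layers + two MLP layers" MNIST classifier;
# all hyperparameters here are assumed, not taken from the paper.
import torch
import torch.nn as nn

class MNISTClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28 -> 14
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 7
        )
        self.mlp = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.mlp(self.features(x))

logits = MNISTClassifier()(torch.randn(8, 1, 28, 28))   # shape (8, 10)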