Dual Diffusion Implicit Bridges for Image-to-Image Translation

Authors: Xuan Su, Jiaming Song, Chenlin Meng, Stefano Ermon

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimentally, we apply DDIBs on synthetic and high-resolution image datasets to demonstrate their utility in a wide variety of translation tasks and their inherent optimal transport properties. We first present synthetic experiments on two-dimensional datasets to corroborate DDIBs' cycle-consistent and optimal transport properties, then evaluate our method on a variety of image modalities, with qualitative and quantitative results. (A numerical cycle-consistency check is sketched after the table.)
Researcher Affiliation | Collaboration | Stanford University; NVIDIA; CZ Biohub
Pseudocode | Yes | Algorithm 1: High-level Pseudo-code for DDIBs. (A minimal Python sketch of this algorithm appears after the table.)
Open Source Code | Yes | Code: https://github.com/suxuann/ddib/
Open Datasets | Yes | Quantitatively, we demonstrate that DDIBs deliver competitive results on paired domain tests. This evaluation holds despite DDIBs being formulated under a weaker setting: the diffusion models are trained independently, on separate datasets, whereas methods such as CycleGAN and AlignFlow assume access to both datasets during training and jointly optimize for the translation loss. For paired domain translation, as in similar works, we evaluate DDIBs on the benchmark paired datasets of Zhu et al. (2017), Facades and Maps, both image segmentation tasks. In a separate experiment, we apply DDIBs to translation among ImageNet classes, leveraging the pretrained diffusion models from Dhariwal & Nichol (2021). (A class-translation sketch appears after the table.)
Dataset Splits | No | The paper mentions using "ImageNet validation sets" but does not provide specific details on the dataset splits (percentages, counts, or an explicit splitting methodology) needed for reproduction.
Hardware Specification | No | The paper does not provide specific hardware details, such as the GPU or CPU models used to run the experiments.
Software Dependencies | No | The paper does not specify software dependencies with version numbers, such as the libraries or frameworks used.
Experiment Setup | No | The paper describes the general approach and models used but does not provide specific experimental setup details such as hyperparameters (e.g., learning rate, batch size, number of epochs) or detailed training configurations.
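
For reference, below is a minimal sketch of the translation procedure that Algorithm 1 describes, not the authors' implementation: the source image is encoded into a shared latent by running the source model's deterministic DDIM (probability-flow) ODE forward, then decoded by running the target model's ODE backward. The `model(x, t)` noise-prediction signature and the `alpha_bars` schedule are assumptions for illustration.

```python
import torch

@torch.no_grad()
def ddim_step(model, x, t, a_cur, a_nxt):
    """One deterministic (eta = 0) DDIM step from alpha-bar level a_cur to a_nxt."""
    t_batch = torch.full((x.shape[0],), t, device=x.device, dtype=torch.long)
    eps = model(x, t_batch)                             # predicted noise eps_theta(x_t, t)
    x0 = (x - (1 - a_cur) ** 0.5 * eps) / a_cur ** 0.5  # implied clean image
    return a_nxt ** 0.5 * x0 + (1 - a_nxt) ** 0.5 * eps # re-noise to the next level

@torch.no_grad()
def ddib_translate(source_model, target_model, x, alpha_bars):
    """DDIB translation: source ODE up to t = T, then target ODE back to t = 0.

    alpha_bars: decreasing list of strictly positive alpha-bar values,
    from ~1 at t = 0 down to a small value at t = T.
    """
    T = len(alpha_bars) - 1
    for t in range(T):           # encode: source image -> shared latent
        x = ddim_step(source_model, x, t, alpha_bars[t], alpha_bars[t + 1])
    for t in range(T, 0, -1):    # decode: shared latent -> target image
        x = ddim_step(target_model, x, t, alpha_bars[t], alpha_bars[t - 1])
    return x
```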
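For the ImageNet class-translation experiment, the "source" and "target" models can be one class-conditional network conditioned on different labels. The wrapper below is hypothetical: the `model(x, t, y=...)` signature follows the guided-diffusion convention of Dhariwal & Nichol (2021) but is an assumption here, as are the class indices.

```python
def conditioned(model, class_id):
    """Wrap a class-conditional eps-model as a two-argument callable for ddib_translate."""
    def eps_fn(x, t_batch):
        y = torch.full((x.shape[0],), class_id, device=x.device, dtype=torch.long)
        return model(x, t_batch, y=y)  # assumed guided-diffusion-style signature
    return eps_fn

# Usage sketch: translate a batch between two ImageNet classes
# (indices are illustrative):
# x_out = ddib_translate(conditioned(unet, 291), conditioned(unet, 292),
#                        x_in, alpha_bars)
```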
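Finally, the cycle-consistency property that the synthetic 2D experiments corroborate can be checked numerically: a source -> target -> source round trip should recover the input up to ODE discretization error. A hypothetical check, reusing `ddib_translate` from the sketch above:

```python
@torch.no_grad()
def cycle_error(source_model, target_model, x, alpha_bars):
    """Mean L1 error of the source -> target -> source round trip."""
    x_fwd = ddib_translate(source_model, target_model, x, alpha_bars)
    x_back = ddib_translate(target_model, source_model, x_fwd, alpha_bars)
    return (x - x_back).abs().mean().item()
```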