Diffusion Schrödinger Bridge Matching
Authors: Yuyang Shi, Valentin De Bortoli, Andrew Campbell, Arnaud Doucet
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the performance of DSBM on a variety of problems. |
| Researcher Affiliation | Academia | Yuyang Shi (University of Oxford), Valentin De Bortoli (ENS Ulm), Andrew Campbell (University of Oxford), Arnaud Doucet (University of Oxford) |
| Pseudocode | Yes | Algorithm 1 Diffusion Schrödinger Bridge Matching |
| Open Source Code | Yes | Code can be found at https://github.com/yuyang-shi/dsbm-pytorch. |
| Open Datasets | Yes | MNIST, EMNIST transfer. We test our method for domain transfer between MNIST digits and EMNIST letters as in De Bortoli et al. (2021). [...] We use the dataset in (Bischoff and Deck, 2023) |
| Dataset Splits | No | The paper discusses training parameters but does not explicitly provide training/validation/test dataset splits or sample counts. |
| Hardware Specification | Yes | The experiments are run on computing clusters with a mixture of both CPU and GPU resources. [...] The experiments are performed using 2 GPUs and take approximately one day. [...] The training takes approximately 20 hours on a single RTX GPU. [...] DSBM-IMF ran for approximately 4 additional days using 4 V100 GPUs. |
| Software Dependencies | No | The paper mentions software components such as the Adam optimizer and SiLU activations, and implies PyTorch through the GitHub repository name, but it does not specify version numbers for any software dependencies. |
| Experiment Setup | Yes | In all experiments, we use Brownian motion for the reference measure Q with corresponding Brownian bridge (4) and T = 1. We use the Adam optimizer with learning rate 10^-4 and SiLU activations unless specified otherwise. [...] We use batch size 128 and 20 diffusion steps with uniform schedule at sampling time. Each outer iteration is trained for 10000 steps and we train for 20 outer iterations. |
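
The experiment setup above quotes a Brownian-bridge reference measure with T = 1 and a set of training hyperparameters. A minimal sketch of sampling from that bridge, together with the quoted hyperparameters collected as a config dict, is below; the function name, the `sigma` parameter, and the config layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def brownian_bridge_sample(x0, x1, t, sigma=1.0, rng=None):
    """Sample x_t from the Brownian bridge between x0 (time 0) and x1 (time T=1):
    the mean is the linear interpolation and the variance is sigma^2 * t * (1 - t)."""
    rng = np.random.default_rng() if rng is None else rng
    mean = (1.0 - t) * x0 + t * x1
    std = sigma * np.sqrt(t * (1.0 - t))
    return mean + std * rng.standard_normal(x0.shape)

# Hyperparameters quoted in the table above (collected here for reference):
config = dict(
    optimizer="Adam",
    learning_rate=1e-4,
    activation="SiLU",
    batch_size=128,
    diffusion_steps=20,   # uniform schedule at sampling time
    inner_steps=10_000,   # training steps per outer iteration
    outer_iterations=20,
    T=1.0,
)
```

At t = 0 and t = 1 the bridge variance vanishes, so samples pin exactly to the endpoints x0 and x1, matching the bridge used as the reference process.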