Stochastic Optimal Control for Diffusion Bridges in Function Spaces

Authors: Byoungwoo Park, Jungwon Choi, Sungbin Lim, Juho Lee

NeurIPS 2024

Reproducibility variables, extracted results, and supporting LLM responses:
Research Type: Experimental (5 experiments)
"This section details the experimental setup and the application of the proposed Diffusion Bridges in Function Spaces (DBFS) for generating functional data. We interpret the data from a functional perspective, known as field representation [75, 79], where data are seen as a finite collection of function evaluations {Y[p_i], p_i}_{i=1}^N. Additional experimental details are provided in Appendix A.8."
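The field representation quoted above can be sketched in a few lines. This is a minimal illustration (not the authors' code, and `to_field` is a hypothetical helper name): a discretized function such as an image is stored as a finite collection of function evaluations {Y[p_i], p_i}_{i=1}^N, i.e. (coordinate, value) pairs rather than a fixed-size grid.

```python
import numpy as np

def to_field(image):
    """Convert an H x W array into (coordinates, values) evaluation pairs."""
    h, w = image.shape
    # Normalize pixel coordinates to the unit square [0, 1]^2.
    ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=-1)  # points p_i, shape (N, 2)
    values = image.ravel()[:, None]                       # evaluations Y[p_i], shape (N, 1)
    return coords, values

image = np.arange(16.0).reshape(4, 4)
coords, values = to_field(image)
print(coords.shape, values.shape)  # (16, 2) (16, 1)
```

One advantage of this view is that the same model can be queried at arbitrary coordinate sets, not just the training grid.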
Researcher Affiliation: Collaboration
"Byoungwoo Park^1, Jungwon Choi^1, Sungbin Lim^{2,3}, Juho Lee^1 — KAIST^1, Korea University^2, LG AI Research^3. {bw.park, jungwon.choi, juholee}@kaist.ac.kr, sungbin@korea.ac.kr"
Pseudocode: Yes
"Algorithm 1: Bridge Matching of DBFS"
Open Source Code: Yes
"Code is available at https://github.com/bw-park/DBFS."
Open Datasets: Yes
"For the EMNIST and MNIST datasets, the initial distribution π_0 was set as the MNIST dataset, while for the terminal distribution π_T we used the EMNIST dataset with the first five lowercase and uppercase characters, as outlined by [19]."
Dataset Splits: No
The paper mentions generating synthetic data and creating a specific test set for PhysioNet, but it does not explicitly provide percentages or counts for training/validation/test splits for all experiments, nor does it refer to standard predefined splits for all datasets used.
Hardware Specification: Yes
"We use a single A6000 GPU for this experiment."
Software Dependencies: No
The paper mentions the Adam optimizer and implies PyTorch use through cited repositories (e.g., 'idbm-pytorch'), but it does not specify version numbers for Python, PyTorch, or other software libraries used in the experiments.
Experiment Setup: Yes
"Training was conducted using the Adam optimizer with a learning rate of 1e-3. The network was trained with a batch size of 24 for a total of 1000 iterations. We set σ = 0.2 in (1) for this experiment and used 100 discretization steps."
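The reported hyperparameters can be collected into a hedged PyTorch sketch. The optimizer settings (Adam, lr 1e-3, batch size 24, 1000 iterations, σ = 0.2, 100 discretization steps) come from the quote above; the model and loss below are generic placeholders, not the authors' DBFS bridge-matching objective.

```python
import torch

# Placeholder network; the paper's architecture is not reproduced here.
model = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # reported learning rate

sigma = 0.2        # diffusion coefficient sigma in the paper's Eq. (1)
num_steps = 100    # reported number of SDE discretization steps
batch_size = 24
num_iters = 1000

for it in range(num_iters):
    x = torch.randn(batch_size, 2)       # placeholder batch of inputs
    target = torch.zeros(batch_size, 1)  # placeholder regression target
    loss = torch.nn.functional.mse_loss(model(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The loop is only meant to show how the reported settings slot into a standard training script.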