Learning Diffusion Bridges on Constrained Domains
Authors: Xingchao Liu, Lemeng Wu, Mao Ye, Qiang Liu
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our algorithms for generating mixed-type tabular data, grid-valued point clouds, categorical semantic segmentation maps, and discrete CIFAR10 images. |
| Researcher Affiliation | Academia | Xingchao Liu, Lemeng Wu, Mao Ye, Qiang Liu Department of Computer Science University of Texas at Austin {xcliu, lmwu, my21, lqiang}@cs.utexas.edu |
| Pseudocode | Yes | Algorithm 1 Learning Diffusion Models on Constrained Domains (a Simple Example) and Algorithm 2 Learning Ω-Bridge Diffusion Models |
| Open Source Code | Yes | Code is available at https://github.com/gnobitab/ConstrainedDiffusionBridge. |
| Open Datasets | Yes | In this experiment, we use the Adult Income dataset (Kohavi, 1996)... We test our method on ShapeNet (Chang et al., 2015) chair models... In Cityscapes (Cordts et al., 2016)... We compare with the variants of a state-of-the-art discrete diffusion model, D3PM (Austin et al., 2021). |
| Dataset Splits | No | The paper mentions using a 'training set and test set' for Cityscapes data, referencing another paper for the split, but does not explicitly provide details for a validation split or its percentages/counts for any dataset. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory specifications) used to run the experiments. |
| Software Dependencies | No | The paper mentions software like 'Adam optimizer' and references other implementations but does not specify version numbers for any key software dependencies used in their own methodology. |
| Experiment Setup | Yes | "By default, we set the initialization Z0 = 0 and the optimizer Adam." (Algorithm Overview) and "The network is optimized with Adam optimizer with a learning rate of 0.0002. The model is trained for 500 epochs." (Section B.3) |
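The setup row above reports Adam with a learning rate of 0.0002 but no further optimizer hyperparameters. As a hedged illustration only (not the authors' code), a single-parameter Adam update with that learning rate can be sketched as follows; the beta and epsilon values are the common defaults, assumed here because the quoted passage does not state them:

```python
# Minimal single-parameter Adam step. lr = 0.0002 matches the paper's
# quoted setup; beta1, beta2, and eps are assumed defaults, not values
# taken from the paper.

def adam_step(theta, grad, state, lr=0.0002, beta1=0.9, beta2=0.999, eps=1e-8):
    """Return the updated parameter and optimizer state after one Adam step."""
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, (m, v, t)

# One step from theta = 1.0 with gradient 0.5:
theta, state = adam_step(1.0, 0.5, (0.0, 0.0, 0))
```

On the first step the bias-corrected update magnitude equals roughly the learning rate, so theta moves from 1.0 to about 0.9998.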