Deep Generative Learning via Schrödinger Bridge

Authors: Gefei Wang, Yuling Jiao, Qian Xu, Yang Wang, Can Yang

ICML 2021

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results on multimodal synthetic data and benchmark data support our theoretical findings and indicate that the generative model via Schrödinger Bridge is comparable with state-of-the-art GANs, suggesting a new formulation of generative learning.
Researcher Affiliation Collaboration 1Department of Mathematics, The Hong Kong University of Science and Technology, Hong Kong, China 2School of Mathematics and Statistics, Wuhan University, Wuhan, China 3AI Group, WeBank Co., Ltd., Shenzhen, China 4Guangdong-Hong Kong-Macao Joint Laboratory for Data-Driven Fluid Mechanics and Engineering Applications, The Hong Kong University of Science and Technology, Hong Kong, China.
Pseudocode Yes Algorithm 1 (Sampling) and Algorithm 2 (Inpainting with stage 2)
Open Source Code Yes The code for reproducing all our experiments is available at https://github.com/YangLabHKUST/DGLSB.
Open Datasets Yes We use two benchmark datasets including CIFAR-10 (Krizhevsky et al., 2009) and CelebA (Liu et al., 2015).
Dataset Splits No The paper normalizes data and uses 50,000 samples to estimate the mean for data centering, but it does not specify train/validation/test splits, percentages, or sample counts for the datasets used.
Hardware Specification No The paper mentions that 'The computational task for this work was partially performed using the X-GPU cluster supported by the RGC Collaborative Research Fund: C6021-19EF.', but it does not provide specific details on CPU models, GPU models, memory, or other hardware components used for the experiments.
Software Dependencies No The paper does not specify any software dependencies or their version numbers, such as programming languages, libraries, or frameworks used for implementation.
Experiment Setup Yes For the noise level σ, we set σ = 1.0 in this paper for generative tasks including both the 2D example and CIFAR-10... For larger images like CelebA, as the dimensionality of samples is higher, we increase the noise level σ to 2.0... The numbers of grids are chosen as N1 = N2 = 1,000 for stage 1 and stage 2. We use sample size N3 = 1 to estimate the drift term in stage 1 for both 2D toy examples and real images.
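The reported setup (noise level σ and N1 = N2 = 1,000 grid points per stage) corresponds to simulating the bridge SDE on a time grid. A minimal sketch of such an Euler-Maruyama sampler, assuming a NumPy implementation and a placeholder drift function (the paper estimates the drift from data; the drift here is purely illustrative):

```python
import numpy as np

def euler_maruyama_sample(drift, x0, sigma=1.0, n_steps=1000, seed=0):
    """Simulate dX_t = drift(X_t, t) dt + sigma dW_t on t in [0, 1]
    using an Euler-Maruyama discretization with n_steps grid points."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    x = np.array(x0, dtype=float)
    for k in range(n_steps):
        t = k * dt
        # Deterministic drift step plus Gaussian diffusion increment.
        x = x + drift(x, t) * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

# Illustrative usage with a hypothetical mean-reverting drift on 2D toy points,
# matching the paper's reported sigma = 1.0 and 1,000 grid points.
samples = euler_maruyama_sample(lambda x, t: -x, x0=np.zeros((16, 2)),
                                sigma=1.0, n_steps=1000)
```

This only shows the discretization pattern implied by the reported hyperparameters; the actual two-stage sampler and drift estimator are given in the paper's Algorithm 1.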