Generative Modeling with Phase Stochastic Bridge
Authors: Tianrong Chen, Jiatao Gu, Laurent Dinh, Evangelos Theodorou, Joshua M. Susskind, Shuangfei Zhai
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our model yields comparable results in image generation and notably outperforms baseline methods, particularly when faced with a limited Number of Function Evaluations. ... We achieve competitive results compared to DM approaches equipped with specifically designed fast sampling techniques on image datasets, particularly in small NFE settings. |
| Researcher Affiliation | Collaboration | 1Georgia Tech, 2Apple |
| Pseudocode | Yes | Algorithm 1 Training; Algorithm 2 Sampling |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | CIFAR-10 (Krizhevsky et al., 2009), AFHQv2 (Choi et al., 2020) and ImageNet (Deng et al., 2009) |
| Dataset Splits | No | The paper references well-known datasets but does not explicitly provide the training/test/validation dataset splits (e.g., percentages, sample counts, or citations to specific predefined splits) required for reproduction. |
| Hardware Specification | Yes | We use 8 Nvidia A100 GPUs for all experiments. |
| Software Dependencies | No | The paper mentions the AdamW optimizer but does not provide specific version numbers for software dependencies such as Python, PyTorch, or CUDA used for running the experiments. |
| Experiment Setup | Yes | We use AdamW (Loshchilov & Hutter, 2017) as our optimizer and Exponential Moving Averaging with an exponential decay rate of 0.9999. We use 8 Nvidia A100 GPUs for all experiments. For further training setup, please refer to Table 6. Table 6: Additional experimental details [includes] training iterations, learning rate, batch size, and network architecture. |
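
The Experiment Setup row describes a standard optimizer-plus-EMA training configuration. The following is a minimal sketch of that setup, assuming PyTorch (the paper does not state its framework or versions); the placeholder network, learning rate, and training-step outline are illustrative assumptions, with the concrete values deferred to the paper's Table 6.

```python
# Minimal sketch of the reported optimizer/EMA setup, assuming PyTorch.
# The network, learning rate, and training loop below are hypothetical
# placeholders; the paper defers these specifics to its Table 6.
import copy
import torch

model = torch.nn.Linear(3, 3)      # placeholder for the paper's network architecture
ema_model = copy.deepcopy(model)   # EMA copy of the parameters
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # AdamW, lr per Table 6

EMA_DECAY = 0.9999  # exponential decay rate reported in the setup


def ema_update(model, ema_model, decay=EMA_DECAY):
    """Exponential moving average of parameters after each optimizer step."""
    with torch.no_grad():
        for p, p_ema in zip(model.parameters(), ema_model.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1.0 - decay)


# Typical training step (the loss itself is method-specific and omitted here):
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
#   ema_update(model, ema_model)
```

Sampling and evaluation would then use `ema_model` rather than the raw weights, which is the usual reason an exponential decay rate like 0.9999 is reported alongside the optimizer.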