Reflected Flow Matching
Authors: Tianyu Xie, Yu Zhu, Longlin Yu, Tong Yang, Ziheng Cheng, Shiyue Zhang, Xiangyu Zhang, Cheng Zhang
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We first test the effectiveness of RFM for modeling probability distributions on low-dimensional constrained domains, including hypercube, simplex, half annulus, and cup. [...] Table 1 reports the approximation accuracies measured by the Kullback-Leibler (KL) divergences to the ground truth obtained by different methods. (An illustrative KL-estimation sketch follows the table.) |
| Researcher Affiliation | Collaboration | 1School of Mathematical Sciences, Peking University, Beijing, China 2Institute of Automation, Chinese Academy of Sciences, Beijing, China 3Beijing Academy of Artificial Intelligence, Beijing, China 4School of Computer Science, Fudan University, Shanghai, China 5Megvii Technology Inc., Beijing, China 6Center for Statistical Science, Peking University, Beijing, China. |
| Pseudocode | Yes | Algorithm 1 Sampling from Reflected CNFs (an illustrative sampling sketch follows the table) |
| Open Source Code | Yes | Our code is available at https://github.com/tyuxie/RFM. |
| Open Datasets | Yes | We then explore the performances of RFM for the unconditional image generation task on CIFAR-10 (32×32) and the conditional generation task on ImageNet (64×64). |
| Dataset Splits | Yes | We then explore the performances of RFM for the unconditional image generation task on CIFAR-10 (32×32) and the conditional generation task on ImageNet (64×64). |
| Hardware Specification | Yes | It costs 1.5 days on 8 Nvidia 2080 Ti GPUs to train reflected CNFs with RFM on CIFAR-10. [...] It costs 14 days on 32 Nvidia A100 GPUs to train FM and RFM on ImageNet (64×64). |
| Software Dependencies | No | All models are implemented in PyTorch (Paszke et al., 2019) and optimized with the Adam (Kingma & Ba, 2015) optimizer. While software names are mentioned, specific version numbers for PyTorch or other libraries are not provided. |
| Experiment Setup | Yes | On CIFAR-10, the velocity model of RFM is optimized with Adam (Kingma & Ba, 2015) and a constant learning rate of 0.0002 after a warm-up phase of 5000 training steps; on ImageNet, the velocity model of RFM is optimized with AdamW (Loshchilov & Hutter, 2018) and a constant learning rate of 0.0001 after a warm-up phase of 5000 training steps. The total number of training steps is 800,000 on CIFAR-10 and 540,000 on ImageNet. The batch size is set to 128 on CIFAR-10 and 2048 on ImageNet. [...] Table 5. Hyper-parameters used for training each model. (A learning-rate schedule sketch follows the table.) |
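
The Research Type row quotes the paper's evaluation protocol: approximation accuracy on the low-dimensional constrained domains is measured by KL divergences to the ground truth. The paper does not spell out its estimator, so the following is only a minimal histogram-based sketch of how such a sample-based KL estimate could look; the function name, 2D restriction, and binning choices are illustrative assumptions rather than the paper's procedure.

```python
import numpy as np

def histogram_kl(ground_truth_samples, model_samples, bins=50, eps=1e-10):
    """Rough histogram estimate of KL(ground truth || model) for 2D samples.

    Illustrative only: the paper's actual KL estimator is not specified.
    Both inputs are arrays of shape (N, 2).
    """
    # Shared bin edges over the constrained domain, inferred from the ground truth.
    lo = ground_truth_samples.min(axis=0)
    hi = ground_truth_samples.max(axis=0)
    edges = [np.linspace(lo[d], hi[d], bins + 1) for d in range(2)]

    p, _ = np.histogramdd(ground_truth_samples, bins=edges)
    q, _ = np.histogramdd(model_samples, bins=edges)

    # Convert counts to probability mass per bin and avoid log(0) / division by zero.
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))
```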
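
The Pseudocode row cites Algorithm 1, "Sampling from Reflected CNFs". As a rough illustration of the idea (Euler integration of a learned velocity field with trajectories reflected back into the constrained domain), here is a minimal sketch assuming the domain is the unit hypercube [0, 1]^d and that `velocity_model(x, t)` stands in for the trained network; the paper's actual algorithm may differ in its reflection operator, integrator, and prior.

```python
import torch

def reflect_into_unit_cube(x):
    """Fold coordinates back into [0, 1] by mirror reflection at the faces."""
    # e.g. 1.2 -> 0.8 and -0.3 -> 0.3; the modulo handles large overshoots.
    x = torch.remainder(x, 2.0)
    return torch.where(x > 1.0, 2.0 - x, x)

@torch.no_grad()
def sample_reflected_cnf(velocity_model, num_samples, dim, num_steps=100, device="cpu"):
    """Euler integration of a learned velocity field with boundary reflection.

    `velocity_model(x, t)` is assumed to map a batch of states and times to
    velocities; the constrained domain is assumed to be [0, 1]^dim.
    """
    x = torch.rand(num_samples, dim, device=device)  # simple prior supported on the domain
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((num_samples, 1), i * dt, device=device)
        x = x + dt * velocity_model(x, t)  # plain Euler step
        x = reflect_into_unit_cube(x)      # push any escaped points back inside
    return x
```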
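
The Experiment Setup row describes Adam/AdamW with a constant learning rate after a 5,000-step warm-up. Below is a minimal PyTorch sketch of such a schedule, using the quoted CIFAR-10 values (learning rate 0.0002, batch size 128, 800,000 steps) with a placeholder model and loss instead of the paper's velocity network and RFM objective.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import LambdaLR

# Hypothetical stand-in for the RFM velocity network; the paper uses an image model.
model = torch.nn.Linear(8, 8)
optimizer = Adam(model.parameters(), lr=2e-4)  # CIFAR-10 setting quoted above

warmup_steps = 5000
# Linear warm-up to the base learning rate, then constant thereafter.
scheduler = LambdaLR(optimizer, lr_lambda=lambda step: min(1.0, (step + 1) / warmup_steps))

total_steps = 800_000  # CIFAR-10; the quoted setup uses 540,000 steps on ImageNet
for step in range(total_steps):
    x = torch.randn(128, 8)            # batch size 128 on CIFAR-10
    loss = model(x).pow(2).mean()      # placeholder loss, not the RFM objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```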