Constant Acceleration Flow
Authors: Dogyun Park, Sojin Lee, Sihyeon Kim, Taehoon Lee, Youngjoon Hong, Hyunwoo J. Kim
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our comprehensive studies on toy datasets, CIFAR-10, and ImageNet 64×64 demonstrate that CAF outperforms state-of-the-art baselines for one-step generation. We also show that CAF dramatically improves few-step coupling preservation and inversion over Rectified Flow. |
| Researcher Affiliation | Academia | Dogyun Park (Korea University, gg933@korea.ac.kr); Sojin Lee (Korea University, sojin_lee@korea.ac.kr); Sihyeon Kim (Korea University, sh_bs15@korea.ac.kr); Taehoon Lee (Korea University, 98hoon@korea.ac.kr); Youngjoon Hong (KAIST, hongyj@kaist.ac.kr); Hyunwoo J. Kim (Korea University, hyunwoojkim@korea.ac.kr) |
| Pseudocode | Yes | Algorithm 1 Training process of Constant Acceleration Flow |
| Open Source Code | Yes | Code is available at https://github.com/mlvlab/CAF. |
| Open Datasets | Yes | Our comprehensive studies on toy datasets, CIFAR-10, and ImageNet 64×64 demonstrate that CAF outperforms state-of-the-art baselines for one-step generation. |
| Dataset Splits | No | To further validate the effectiveness of our approach, we train CAF on real-world image datasets, specifically CIFAR-10 at 32×32 resolution and ImageNet at 64×64 resolution. |
| Hardware Specification | Yes | The total training takes about 21 days with 8 NVIDIA A100 GPUs for ImageNet, and 10 days with 8 NVIDIA RTX3090 GPUs for CIFAR-10. |
| Software Dependencies | No | For all experiments, we use AdamW [53] optimizer with a learning rate of 0.0001 and apply an Exponential Moving Average (EMA) with a 0.999 decay rate. |
| Experiment Setup | Yes | For all experiments, we use AdamW [53] optimizer with a learning rate of 0.0001 and apply an Exponential Moving Average (EMA) with a 0.999 decay rate. For adversarial training, we employ adversarial loss L_gan using real data x_{1,real} from [24]: ... We set h = 1.5 and d as LPIPS-Huber loss [43] for all real-data experiments. |
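
The paper's Algorithm 1 is quoted above only by its title. As background for the method's name, the standard constant-acceleration kinematic equation is shown below; this is textbook physics, not a reconstruction of the paper's exact parameterization of the flow, its velocity, or its acceleration model:

```latex
% Standard constant-acceleration kinematics (background only; the paper's
% actual parameterization of x_t, v_0, and a may differ):
\[
  x(t) = x_0 + v_0\,t + \tfrac{1}{2}\,a\,t^2,
  \qquad
  x(1) = x_1 \;\Longrightarrow\; a = 2\,(x_1 - x_0 - v_0),
\]
% i.e., once an initial velocity v_0 is chosen, the boundary condition
% x(1) = x_1 determines the constant acceleration a.
```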
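For readers attempting reproduction, here is a minimal PyTorch sketch of the optimizer and EMA configuration quoted in the Experiment Setup row (AdamW at learning rate 1e-4, EMA with 0.999 decay). Only those two hyperparameters come from the paper; the model, data, and loss are placeholders, not the CAF network or objective:

```python
import copy
import torch

# Sketch of the reported training setup: AdamW (lr = 1e-4) with an
# EMA copy of the weights at a 0.999 decay rate. The model and loss
# below are stand-ins, not the paper's architecture or CAF objective.
model = torch.nn.Linear(64, 64)  # placeholder for the CAF network
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# EMA weights live in a frozen copy of the online model.
ema_model = copy.deepcopy(model)
for p in ema_model.parameters():
    p.requires_grad_(False)

EMA_DECAY = 0.999

def ema_update(ema, online, decay=EMA_DECAY):
    """Blend online weights into the EMA copy: ema = decay*ema + (1-decay)*online."""
    with torch.no_grad():
        for p_ema, p in zip(ema.parameters(), online.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1.0 - decay)

for step in range(100):  # placeholder training loop
    x = torch.randn(32, 64)
    loss = (model(x) - x).pow(2).mean()  # stand-in loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(ema_model, model)  # update EMA after each optimizer step
```

At evaluation time the EMA copy (`ema_model` here) is typically the network used for sampling, which is the usual motivation for maintaining it during training.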