Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Flow matching achieves almost minimax optimal convergence
Authors: Kenji Fukumizu, Taiji Suzuki, Noboru Isobe, Kazusato Oko, Masanori Koyama
ICLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This paper discusses the convergence properties of FM for large sample size under the p-Wasserstein distance. We establish that FM can achieve an almost minimax optimal convergence rate for 1 ≤ p ≤ 2, presenting the first theoretical evidence that FM can reach convergence rates comparable to those of diffusion models. Our analysis extends existing frameworks by examining a broader class of mean and variance functions for the vector fields and identifies specific conditions necessary to attain almost optimal rates. |
| Researcher Affiliation | Collaboration | Kenji Fukumizu, The Institute of Statistical Mathematics / Preferred Networks, Tokyo, Japan (EMAIL); Taiji Suzuki, University of Tokyo / RIKEN AIP, Tokyo, Japan (EMAIL); Masanori Koyama, Preferred Networks / University of Tokyo, Tokyo, Japan (EMAIL) |
| Pseudocode | No | The paper contains detailed mathematical proofs and theoretical analyses but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide links to code repositories or mention code in supplementary materials. |
| Open Datasets | No | The paper is theoretical and analyzes convergence rates for a generic setting where the 'n training data {x^(i)}_{i=1}^n is i.i.d. samples from P_1'. It does not refer to any specific publicly available datasets or provide access information for any data. |
| Dataset Splits | No | The paper focuses on theoretical analysis and does not conduct experiments requiring dataset splits. Therefore, no information about training, testing, or validation splits is provided. |
| Hardware Specification | No | The paper is a theoretical work and does not describe any experimental setups or computational hardware used. |
| Software Dependencies | No | As a theoretical paper, there is no mention of specific software dependencies or their version numbers. |
| Experiment Setup | No | The paper is purely theoretical, focusing on mathematical convergence rates, and therefore does not include details on experimental setup, hyperparameters, or training configurations. |
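To make the paper's subject concrete for readers of this report, the flow matching (FM) objective analyzed above can be sketched in a few lines: regress a velocity field on the conditional target x1 − x0 along interpolation paths, then integrate the learned field to transport source samples to the target distribution. This is a minimal illustrative sketch with a toy 1-D Gaussian target and a simple affine model, not the paper's construction; all names and parameter choices here are assumptions.

```python
# Minimal flow-matching sketch (illustrative, not the paper's estimator):
# learn v(x, t) by least squares on the conditional target x1 - x0
# along linear paths x_t = (1 - t) * x0 + t * x1.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D problem: target P_1 = N(2, 0.5^2), source P_0 = N(0, 1).
x1 = rng.normal(2.0, 0.5, size=(4096, 1))
x0 = rng.normal(0.0, 1.0, size=(4096, 1))
t = rng.uniform(0.0, 1.0, size=(4096, 1))

xt = (1.0 - t) * x0 + t * x1   # point on the interpolation path
target = x1 - x0               # conditional velocity target

# Affine model v(x, t) = a*x + b*t + c, fit by least squares.
features = np.hstack([xt, t, np.ones_like(t)])
coef, *_ = np.linalg.lstsq(features, target, rcond=None)

# Sampling: integrate dx/dt = v(x, t) from t = 0 to 1 with Euler steps.
x = rng.normal(0.0, 1.0, size=(4096, 1))
n_steps = 100
for k in range(n_steps):
    tk = np.full_like(x, k / n_steps)
    v = np.hstack([x, tk, np.ones_like(x)]) @ coef
    x = x + v / n_steps

print(float(x.mean()), float(x.std()))
```

Because both distributions are Gaussian, an affine velocity field suffices and the transported samples land close to N(2, 0.5^2); the paper's analysis concerns the much harder question of how fast such estimates converge in p-Wasserstein distance as the sample size grows.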