Metric Flow Matching for Smooth Interpolations on the Data Manifold

Authors: Kacper Kapusniak, Peter Potaptchik, Teodora Reu, Leo Zhang, Alexander Tong, Michael Bronstein, Joey Bose, Francesco Di Giovanni

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test METRIC FLOW MATCHING on different tasks: artificial dynamics reconstruction and navigation over LiDAR surfaces (Section 5.1); unpaired image translation between classes of images (Section 5.2); and reconstruction of cell dynamics. Further results and experimental details can be found in Appendices D, E, and F. We observe that MFM outperforms the Euclidean baselines, particularly achieving SOTA on single-cell trajectory prediction.
Researcher Affiliation | Collaboration | University of Oxford; Mila; Université de Montréal; AITHYRA
Pseudocode | Yes | Algorithm 1: pseudocode for training of geodesic interpolants. Algorithm 2: pseudocode for METRIC FLOW MATCHING. (A hedged two-stage training sketch appears after the table.)
Open Source Code | Yes | Code is available at https://github.com/kksniak/metric-flow-matching
Open Datasets | Yes | We used the Animal Faces (AFHQ) dataset from Choi et al. [2020], adhering to the train/validation split predefined by the dataset authors, with validation treated as the test set. We utilized the Cite and Multi datasets from the Multimodal Single-cell Integration Challenge at NeurIPS 2022 (Lance et al. [2022]), preprocessed by Tong et al. [2023a].
Dataset Splits | Yes | We used a 90%/10% train/validation split, excluding left-out marginals from both sets. Training samples served as source and target distributions and for calculating the metrics, while validation samples were used for early stopping. (A sketch of this split protocol follows the table.)
Hardware Specification | Yes | The unpaired translation experiment on AFHQ was trained on a GPU cluster with NVIDIA A100 and V100 GPUs.
Software Dependencies | No | The paper mentions software such as PyTorch and the Adam optimizer, but does not provide version numbers for its software dependencies.
Experiment Setup | Yes | In the unpaired translation experiments, we utilized the U-Net architecture setup from Dhariwal and Nichol [2021] for both $\varphi_{t,\eta}(x_0, x_1)$ and $v_{t,\theta}(x_t)$. The exact hyperparameters are reported in Table 5. We used the Adam optimizer (Kingma and Ba [2014]) for both networks and applied early stopping only for $\varphi_{t,\eta}(x_0, x_1)$, based on its training loss. For this experiment, we enforce stronger bending by using $(h_\alpha(x))^8$ in the loss function $\mathcal{L}^{\mathrm{RBF}}_g(\eta)$ in Eq. (11). (A sketch of this optimization setup follows the table.)
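
For readers who want the mechanics behind the two algorithms referenced in the Pseudocode row, here is a minimal PyTorch sketch of the two-stage procedure: Stage 1 fits the interpolant correction network $\varphi_{t,\eta}$ by minimizing a discretized geodesic energy under a data-dependent diagonal metric, and Stage 2 regresses the vector field $v_{t,\theta}$ on the interpolant's time derivative, as in standard flow matching. The networks `phi` and `v`, the metric function `metric_diag`, the discretization, and all hyperparameters are illustrative assumptions, not the authors' implementation (which lives in the linked repository).

```python
import torch
from torch.func import jvp

def interpolant(t, x0, x1, phi):
    # MFM interpolant: the straight line between endpoints plus a learned
    # correction that vanishes at t = 0 and t = 1.
    return (1 - t) * x0 + t * x1 + t * (1 - t) * phi(t, x0, x1)

def stage1_step(phi, metric_diag, x0, x1, opt_phi, n_segments=8):
    # Algorithm 1 (sketch): pull the interpolant toward the data manifold by
    # minimizing a discretized geodesic energy under a diagonal metric g(x).
    # The paper's exact discretization and metric parameterization may differ.
    batch = x0.shape[0]
    xs = [
        interpolant(torch.full((batch, 1), i / n_segments), x0, x1, phi)
        for i in range(n_segments + 1)
    ]
    energy = x0.new_zeros(())
    for a, b in zip(xs[:-1], xs[1:]):
        delta = b - a
        mid = 0.5 * (a + b)
        # sum_d g_d(x) * (dx_d)^2 / dt, with dt = 1 / n_segments
        energy = energy + n_segments * (metric_diag(mid) * delta.pow(2)).sum(-1).mean()
    opt_phi.zero_grad()
    energy.backward()
    opt_phi.step()
    return energy.item()

def stage2_step(v, phi, x0, x1, opt_v):
    # Algorithm 2 (sketch): standard conditional flow matching regression of
    # v onto the time derivative of the (frozen) learned interpolant,
    # obtained here with a forward-mode JVP with respect to t.
    t = torch.rand(x0.shape[0], 1)
    xt, dxt = jvp(lambda s: interpolant(s, x0, x1, phi), (t,), (torch.ones_like(t),))
    xt, dxt = xt.detach(), dxt.detach()  # targets only; no gradient to phi
    loss = (v(t, xt) - dxt).pow(2).sum(-1).mean()
    opt_v.zero_grad()
    loss.backward()
    opt_v.step()
    return loss.item()
```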
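
The split protocol quoted in the Dataset Splits row (90%/10% train/validation, with the left-out marginal excluded from both sets) could be set up along these lines. This is a sketch assuming per-timepoint snapshot arrays; `snapshots`, `leave_out`, and the seed are illustrative names, not the authors' data-loading code.

```python
import numpy as np

def make_splits(snapshots, leave_out, val_frac=0.1, seed=0):
    # snapshots: list of (n_i, d) arrays, one per measured timepoint.
    # The left-out timepoint is excluded from train and validation alike;
    # it serves only as the evaluation target.
    rng = np.random.default_rng(seed)
    train, val = [], []
    for i, x in enumerate(snapshots):
        if i == leave_out:
            continue  # held-out marginal: kept out of both sets
        perm = rng.permutation(len(x))
        n_val = int(round(val_frac * len(x)))
        val.append(x[perm[:n_val]])
        train.append(x[perm[n_val:]])
    return train, val, snapshots[leave_out]
```

The training snapshots then provide the source and target marginals (and the samples used to fit the metric), while the validation snapshots drive early stopping, matching the quoted protocol.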
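
Finally, the optimization setup in the Experiment Setup row (Adam for both networks, early stopping only for $\varphi_{t,\eta}$ based on its training loss) amounts to a loop like the one below. It reuses `stage1_step` from the first sketch; `sample_pairs`, the learning rate, and the patience are placeholder assumptions. The quoted setup additionally raises the RBF metric term to the eighth power, $(h_\alpha(x))^8$, inside the stage-1 loss, which would be realized here through the choice of `metric_diag`.

```python
import torch

def train_phi_with_early_stopping(phi, metric_diag, sample_pairs,
                                  max_steps=10_000, lr=1e-3, patience=20):
    # Adam for phi; early stopping monitors the *training* loss, as quoted.
    opt_phi = torch.optim.Adam(phi.parameters(), lr=lr)
    best, bad = float("inf"), 0
    for _ in range(max_steps):
        x0, x1 = sample_pairs()  # minibatch from source and target marginals
        loss = stage1_step(phi, metric_diag, x0, x1, opt_phi)
        if loss < best - 1e-6:
            best, bad = loss, 0
        else:
            bad += 1
            if bad >= patience:
                break
    return phi
```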