Markovian Flow Matching: Accelerating MCMC with Continuous Normalizing Flows

Authors: Alberto Cabezas, Louis Sharrock, Christopher Nemeth

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we evaluate the performance of MFM (Algorithm 1) on two synthetic and two real data examples. Our method is benchmarked against four relevant methods."
Researcher Affiliation | Academia | Alberto Cabezas, Department of Mathematics and Statistics, Lancaster University, UK (a.cabezasgonzalez@lancaster.ac.uk); Louis Sharrock, Department of Mathematics and Statistics, Lancaster University, UK (l.sharrock@lancaster.ac.uk); Christopher Nemeth, Department of Mathematics and Statistics, Lancaster University, UK (c.nemeth@lancaster.ac.uk)
Pseudocode | Yes | "Algorithm 1 Markovian Flow Matching"
Open Source Code | Yes | "Code to reproduce the experiments is provided at https://github.com/albcab/mfm."
Open Datasets | Yes | "Our first real-world example considers the stochastic Allen Cahn model [7], used as a benchmark in [24], and described in Appendix C.5." [...] "One such model is the log-Gaussian Cox process (LGCP) introduced in [53], which is used to model the locations of 126 Scots pine saplings in a natural forest in Finland. See Appendix C.6 for full details."
Dataset Splits | No | The paper does not provide explicit training/validation/test splits (as percentages or sample counts); it refers to samples drawn from the Markov chain for training, not fixed dataset splits.
Hardware Specification | Yes | "All experiments are run on an NVIDIA V100 GPU with 32GB of memory."
Software Dependencies | No | "Code for the numerical experiments is written in Python with array computations handled by JAX [11]." The paper names JAX but gives no version numbers for JAX, Python, or any other dependency.
Experiment Setup | Yes | "For this experiment, all methods use N = 128 parallel chains for training and 128 hidden dimensions for all neural networks. Methods with a MALA kernel use a step size of 0.2, and methods with splines use 4 coupling layers with 8 bins and range limited to [-16, 16]."
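The pseudocode row cites Algorithm 1, which alternates MCMC sampling with flow-matching updates of a continuous normalizing flow. The core flow-matching objective regresses a velocity field onto the straight-line (conditional OT) vector field between base noise and target samples. Below is a minimal NumPy sketch of that objective only; the shifted-Gaussian "target samples", the linear velocity model, and the training sizes are illustrative assumptions, not the paper's architecture or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for samples of the target: a shifted 2-D Gaussian.
x1_data = rng.standard_normal((512, 2)) + 3.0

# Linear velocity field v(t, x) = W @ [x, t, 1]; a toy stand-in for the
# 128-hidden-unit networks mentioned in the experiment setup.
W = np.zeros((2, 4))
losses = []
lr = 0.02

for _ in range(500):
    x0 = rng.standard_normal(x1_data.shape)      # base (noise) samples
    t = rng.uniform(size=(x1_data.shape[0], 1))  # random times in [0, 1]
    xt = (1 - t) * x0 + t * x1_data              # straight-line interpolation
    target = x1_data - x0                        # conditional vector field
    phi = np.concatenate([xt, t, np.ones_like(t)], axis=1)  # features (n, 4)
    pred = phi @ W.T                             # predicted velocities (n, 2)
    losses.append(np.mean((pred - target) ** 2))
    grad = 2 * (pred - target).T @ phi / x1_data.shape[0]
    W -= lr * grad  # gradient step on the loss (constant factors folded into lr)
```

Sampling from the trained flow would then integrate dx/dt = v(t, x) from t = 0 to 1 starting at base noise; in MFM this flow additionally feeds a non-local proposal back into the Markov chain.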
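The MALA kernel named in the setup row (step size 0.2) is the standard Metropolis-adjusted Langevin algorithm: a Langevin drift proposal corrected by a Metropolis-Hastings accept/reject step. A minimal NumPy sketch, using an illustrative standard-Gaussian target rather than any target from the paper:

```python
import numpy as np

def mala_step(x, log_prob, grad_log_prob, step_size, rng):
    """One Metropolis-adjusted Langevin (MALA) transition."""
    eps2 = step_size ** 2
    # Langevin proposal: drift half a squared step along the score, plus noise.
    mean_fwd = x + 0.5 * eps2 * grad_log_prob(x)
    prop = mean_fwd + step_size * rng.standard_normal(x.shape)
    mean_bwd = prop + 0.5 * eps2 * grad_log_prob(prop)
    # The proposal is asymmetric, so both directions enter the MH ratio.
    log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2 * eps2)
    log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (2 * eps2)
    log_alpha = log_prob(prop) - log_prob(x) + log_q_bwd - log_q_fwd
    if np.log(rng.uniform()) < log_alpha:
        return prop, True
    return x, False

# Illustrative target: a standard 2-D Gaussian (not from the paper).
log_prob = lambda x: -0.5 * np.sum(x ** 2)
grad_log_prob = lambda x: -x

rng = np.random.default_rng(0)
x = np.zeros(2)
samples, accepts = [], 0
for _ in range(2000):
    x, accepted = mala_step(x, log_prob, grad_log_prob, 0.2, rng)
    accepts += accepted
    samples.append(x)
samples = np.stack(samples)
accept_rate = accepts / len(samples)
```

In MFM this local kernel is run in parallel over the N = 128 chains; a small step size such as 0.2 keeps the acceptance rate high at the cost of slower exploration, which the flow-informed non-local moves are meant to offset.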