Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Markovian Flow Matching: Accelerating MCMC with Continuous Normalizing Flows
Authors: Alberto Cabezas, Louis Sharrock, Christopher Nemeth
NeurIPS 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the performance of MFM (Algorithm 1) on two synthetic and two real data examples. Our method is benchmarked against four relevant methods. |
| Researcher Affiliation | Academia | Alberto Cabezas, Department of Mathematics and Statistics, Lancaster University, UK (EMAIL); Louis Sharrock, Department of Mathematics and Statistics, Lancaster University, UK (EMAIL); Christopher Nemeth, Department of Mathematics and Statistics, Lancaster University, UK (EMAIL) |
| Pseudocode | Yes | Algorithm 1 Markovian Flow Matching |
| Open Source Code | Yes | Code to reproduce the experiments is provided at https://github.com/albcab/mfm. |
| Open Datasets | Yes | Our first real-world example considers the stochastic Allen-Cahn model [7], used as a benchmark in [24], and described in Appendix C.5. [...] One such model is the log-Gaussian Cox process (LGCP) introduced in [53], which is used to model the locations of 126 Scots pine saplings in a natural forest in Finland. See Appendix C.6 for full details. |
| Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits with percentages or sample counts. It refers to samples drawn from the Markov chain for training, but does not define fixed dataset splits. |
| Hardware Specification | Yes | All experiments are run on an NVIDIA V100 GPU with 32GB of memory. |
| Software Dependencies | No | Code for the numerical experiments is written in Python with array computations handled by JAX [11]. The paper mentions JAX but does not specify a version number for JAX or Python, nor does it list other software dependencies with specific version numbers. |
| Experiment Setup | Yes | For this experiment, all methods use N = 128 parallel chains for training and 128 hidden dimensions for all neural networks. Methods with a MALA kernel use a step size of 0.2, and methods with splines use 4 coupling layers with 8 bins and range limited to [−16, 16]. |
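For context on the setup quoted in the last row, the sketch below shows what one MALA transition with the reported step size of 0.2 might look like in JAX, the array library the paper uses. This is a minimal illustration of a standard Metropolis-adjusted Langevin step, not the authors' implementation; all function and variable names here are hypothetical (the actual code is in the linked repository).

```python
# Hypothetical sketch of one MALA transition in JAX, using the step size
# of 0.2 reported in the paper's experiment setup. Not the authors' code.
import jax
import jax.numpy as jnp


def mala_step(key, x, log_prob, step_size=0.2):
    """One MALA proposal + Metropolis accept/reject step for a single chain."""
    grad_log_prob = jax.grad(log_prob)
    key_prop, key_acc = jax.random.split(key)

    # Langevin proposal: gradient drift plus Gaussian noise.
    noise = jax.random.normal(key_prop, x.shape)
    x_prop = x + 0.5 * step_size**2 * grad_log_prob(x) + step_size * noise

    # Log density of the proposal kernel, up to an additive constant
    # that cancels in the acceptance ratio.
    def log_q(x_to, x_from):
        mean = x_from + 0.5 * step_size**2 * grad_log_prob(x_from)
        return -jnp.sum((x_to - mean) ** 2) / (2.0 * step_size**2)

    # Metropolis-Hastings correction.
    log_alpha = (log_prob(x_prop) + log_q(x, x_prop)
                 - log_prob(x) - log_q(x_prop, x))
    accept = jnp.log(jax.random.uniform(key_acc)) < log_alpha
    return jnp.where(accept, x_prop, x)


# Example: 128 parallel chains (as in the N = 128 setup above) on a
# standard Gaussian target, vectorized with vmap.
log_prob = lambda x: -0.5 * jnp.sum(x**2)
keys = jax.random.split(jax.random.PRNGKey(0), 128)
xs = jnp.zeros((128, 2))
xs = jax.vmap(lambda k, x: mala_step(k, x, log_prob))(keys, xs)
```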