Matching Normalizing Flows and Probability Paths on Manifolds

Authors: Heli Ben-Hamu, Samuel Cohen, Joey Bose, Brandon Amos, Maximilian Nickel, Aditya Grover, Ricky T. Q. Chen, Yaron Lipman

ICML 2022

Reproducibility variables, each listing the extracted Result and the supporting LLM Response:
Research Type: Experimental
LLM Response: "Empirically, we show that CNFs learned by minimizing PPD achieve state-of-the-art results in likelihoods and sample quality on existing low-dimensional manifold benchmarks, and is the first example of a generative model to scale to moderately high dimensional manifolds." From Section 5 (Experiments): "We have tested the CNFM framework with the PPD for training CNFs on low and moderately high dimensional manifold data."
Researcher Affiliation: Collaboration
LLM Response: Weizmann Institute of Science; Meta AI Research; Centre for Artificial Intelligence, University College London.
Pseudocode: No
LLM Response: The paper does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code: No
LLM Response: The paper does not explicitly state that open-source code for the described methodology is provided, nor does it include a link to a code repository.
Open Datasets: Yes
LLM Response: "In the first experiment we worked with samples drawn from standard toy distributions on the 2D Euclidean plane and sphere." "In this experiment we considered the Earth and Climate dataset curated in (Mathieu & Nickel, 2020)." "We created datasets for k = 2, 3, 4 with 45K train samples and 5K test samples." "Using CNFM as-is on the MNIST dataset (d = 784) with a standard batch size of 128 results in samples shown in inset."
Dataset Splits: No
LLM Response: The paper mentions "45K train samples and 5K test samples" but does not specify a separate validation split or explicit training/validation/test percentages or counts.
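For concreteness, the reported 45K/5K partition can be mimicked with the short sketch below. This is a minimal illustration assuming NumPy; the 2D Gaussian sampler is a stand-in, since the paper's actual data generation is not reproduced here.

```python
# Illustrative 45K train / 5K test split (not the authors' code).
# The Gaussian sampler is a placeholder for the paper's dataset generation.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal((50_000, 2))  # placeholder: 50K 2D points
rng.shuffle(samples)                        # shuffle along the first axis
train, test = samples[:45_000], samples[45_000:]
# Note: no validation split is described in the paper, so none is made here.
```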
Hardware Specification: No
LLM Response: The paper does not provide specific hardware details (such as GPU or CPU models, or memory specifications) used for running its experiments.
Software Dependencies: No
LLM Response: The paper mentions using the Adam optimizer and an MLP but does not specify version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup: Yes
LLM Response: "For the R^2 datasets we used a 3 layer MLP with hidden dimension 256. We trained with Adam optimizer with learning rate 1e-4, batch size 1000, σ1 = 0.01 and ℓ = 1." "The searched parameters across learning rates are {1e-3, 5e-4, 1e-4} and σ1 ∈ {0.005, 0.01, 0.05}."
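To make the reported configuration concrete, the sketch below wires up a 3-layer MLP with hidden dimension 256 and Adam at learning rate 1e-4, as quoted above. It is a minimal sketch assuming PyTorch; the input layout (2D coordinates plus time), the SiLU activations, and the `ppd_loss` placeholder are assumptions, and the paper's PPD objective itself is not reproduced.

```python
# Minimal sketch of the reported R^2 experiment setup (not the authors' code).
# Assumptions: PyTorch; the vector field takes (x, t) with x in R^2 and scalar
# time t; `ppd_loss` is a hypothetical placeholder for the paper's PPD objective.
import torch
import torch.nn as nn


class VectorFieldMLP(nn.Module):
    """3-layer MLP with hidden dimension 256, per the reported setup."""

    def __init__(self, in_dim=3, hidden=256, out_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x, t):
        # Concatenate spatial coordinates and time before the MLP.
        return self.net(torch.cat([x, t], dim=-1))


def ppd_loss(model, x, sigma_1):
    # Hypothetical placeholder: the paper's Probability Path Divergence
    # objective is not reproduced here.
    raise NotImplementedError


model = VectorFieldMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # reported learning rate
sigma_1 = 0.01    # reported value of sigma_1
batch_size = 1000  # reported batch size

# Training loop outline (data loading omitted):
# for x in loader:             # batches of 1000 samples from a 2D toy dataset
#     optimizer.zero_grad()
#     loss = ppd_loss(model, x, sigma_1)
#     loss.backward()
#     optimizer.step()
```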