Universal Approximation Using Well-Conditioned Normalizing Flows

Authors: Holden Lee, Chirag Pabbaraju, Anish Prasad Sevekari, Andrej Risteski

NeurIPS 2021

Reproducibility Variable | Result | LLM Response

Research Type | Theoretical
  In this paper, we show that any log-concave distribution can be approximated using well-conditioned affine-coupling flows. In terms of proof techniques, we uncover and leverage deep connections between affine coupling architectures, underdamped Langevin dynamics (a stochastic differential equation often used to sample from Gibbs measures), and Hénon maps (a structured dynamical system that appears in the study of symplectic diffeomorphisms).

Researcher Affiliation | Academia
  Holden Lee, Mathematics Department, Duke University, Durham, NC 27708, holden.lee@duke.edu
  Chirag Pabbaraju, Computer Science Department, Stanford University, Stanford, CA 94305, cpabbara@cs.stanford.edu
  Anish Sevekari, Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA 15213, asevekar@andrew.cmu.edu
  Andrej Risteski, Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, aristesk@andrew.cmu.edu

Pseudocode | No
  The paper does not contain any clearly labeled pseudocode or algorithm blocks.

Open Source Code | No
  The paper does not provide any statement or link regarding the availability of open-source code for the described methodology.

Open Datasets | No
  The paper is theoretical and does not involve training models on datasets, so no information about public dataset availability is provided.

Dataset Splits | No
  The paper is theoretical and does not involve empirical experiments with datasets, so no training/validation/test split information is provided.

Hardware Specification | No
  The paper is theoretical and does not describe any computational experiments, so no hardware specifications are mentioned.

Software Dependencies | No
  The paper is theoretical and does not describe any computational experiments, so no software dependencies with version numbers are mentioned.

Experiment Setup | No
  The paper is theoretical and does not describe any empirical experimental setup, hyperparameters, or system-level training settings.
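For context on the architecture the result concerns: an affine coupling layer splits its input in half, passes one half through unchanged, and applies an elementwise affine map to the other half whose scale and shift depend only on the first half. This makes the layer invertible in closed form with a triangular Jacobian, and "well-conditioned" refers to keeping the scale term bounded. The following is a minimal NumPy sketch of this generic construction; the function names and toy scale/shift networks are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, s, t):
    """Affine coupling: y1 = x1, y2 = x2 * exp(s(x1)) + t(x1)."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    y2 = x2 * np.exp(s(x1)) + t(x1)
    return np.concatenate([x1, y2], axis=-1)

def inverse(y, s, t):
    """Closed-form inverse: x2 = (y2 - t(y1)) * exp(-s(y1))."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    x2 = (y2 - t(y1)) * np.exp(-s(y1))
    return np.concatenate([y1, x2], axis=-1)

# Toy scale/shift functions: invertibility of the layer never requires
# inverting s or t themselves, only the elementwise affine map.
s = lambda x1: 0.1 * np.tanh(x1)  # small |s| keeps the layer well-conditioned
t = lambda x1: 0.5 * x1

x = rng.standard_normal((4, 6))
y = forward(x, s, t)
assert np.allclose(inverse(y, s, t), x)  # round-trips up to float error
```

Because the Jacobian of the forward map is triangular, its log-determinant is simply the sum of the entries of `s(x1)`, which is what makes these layers practical for exact-likelihood training in normalizing flows.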