Rapid Convergence of the Unadjusted Langevin Algorithm: Isoperimetry Suffices

Authors: Santosh Vempala, Andre Wibisono

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We prove a convergence guarantee in Kullback-Leibler (KL) divergence assuming the target distribution ν satisfies a log-Sobolev inequality and f has a bounded Hessian. Notably, we do not assume convexity or bounds on higher derivatives. We also prove convergence guarantees in Rényi divergence of order q > 1 assuming the limit of ULA satisfies either a log-Sobolev or a Poincaré inequality.
Researcher Affiliation | Academia | Santosh S. Vempala, College of Computing, Georgia Institute of Technology, Atlanta, GA 30332, vempala@gatech.edu; Andre Wibisono, College of Computing, Georgia Institute of Technology, Atlanta, GA 30332, wibisono@gatech.edu
Pseudocode | No | The paper describes the ULA algorithm only by its update formula, x_{k+1} = x_k − ε∇f(x_k) + √(2ε) z_k with z_k ~ N(0, I), and does not present it in a formal pseudocode block or algorithm listing (see the illustrative sketch after this table).
Open Source Code | No | The paper does not contain any statements about making source code publicly available or provide links to code repositories.
Open Datasets | No | The paper is theoretical and does not conduct experiments involving datasets for training. It discusses probability distributions conceptually but does not use them as empirical datasets.
Dataset Splits | No | The paper is theoretical and does not involve empirical validation on datasets; therefore no validation splits are mentioned.
Hardware Specification | No | The paper is theoretical and does not describe any experimental setup that would require hardware specifications.
Software Dependencies | No | The paper is theoretical and does not describe any experimental setup that would require specific software dependencies with version numbers.
Experiment Setup | No | The paper focuses on mathematical proofs and analysis rather than empirical experiments, so it does not include details on an experimental setup or hyperparameters.
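
Since the paper states ULA only as the update rule x_{k+1} = x_k − ε∇f(x_k) + √(2ε) z_k and provides no code, the following is a minimal NumPy sketch of that update, not the authors' implementation. The target here is a hypothetical one-dimensional Gaussian mixture, 0.5·N(−A, 1) + 0.5·N(A, 1), chosen because it is non-log-concave yet satisfies a log-Sobolev inequality, echoing the paper's setting; the values of A, the step size, and the iteration count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (not from the paper): bimodal target
# pi(x) proportional to exp(-x**2 / 2) * cosh(A * x),
# i.e. an equal-weight mixture of N(-A, 1) and N(A, 1).
A = 2.0

def grad_f(x):
    # f(x) = -log pi(x) = x**2 / 2 - log cosh(A * x) + const,
    # so grad f(x) = x - A * tanh(A * x).
    return x - A * np.tanh(A * x)

def ula(x0, step, n_iters):
    # Unadjusted Langevin Algorithm:
    #   x_{k+1} = x_k - step * grad_f(x_k) + sqrt(2 * step) * z_k,
    # with z_k ~ N(0, I). No Metropolis correction is applied, so the
    # iterates converge to a biased stationary law near pi.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        z = rng.standard_normal(x.shape)
        x = x - step * grad_f(x) + np.sqrt(2.0 * step) * z
    return x

# Run many independent chains so the empirical distribution of the
# final iterates approximates the law rho_k of the k-th iterate.
samples = ula(x0=np.zeros(10_000), step=0.05, n_iters=2_000)
print("empirical mean (target: 0):", samples.mean())
print("empirical second moment (target: 1 + A^2 = 5):", (samples ** 2).mean())
```

Shrinking the step size reduces the asymptotic bias of the chain's stationary law at the cost of slower mixing; the paper's KL guarantee quantifies exactly this trade-off, with a geometrically decaying term plus a bias term that vanishes as the step size goes to zero.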