Lower Bounds on Metropolized Sampling Methods for Well-Conditioned Distributions
Authors: Yin Tat Lee, Ruoqi Shen, Kevin Tian
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We give lower bounds on the performance of two of the most popular sampling methods in practice, the Metropolis-adjusted Langevin algorithm (MALA) and multi-step Hamiltonian Monte Carlo (HMC) with a leapfrog integrator, when applied to well-conditioned distributions. Our main result is a nearly-tight lower bound of Ω̃(κd) on the mixing time of MALA from an exponentially warm start, matching a line of algorithmic results [DCWY18, CDWY19, LST20a] up to logarithmic factors and answering an open question of [CLA+20]. |
| Researcher Affiliation | Collaboration | Yin Tat Lee (University of Washington and Microsoft Research, yintat@uw.edu); Ruoqi Shen (University of Washington, shenr3@cs.washington.edu); Kevin Tian (Stanford University, kjtian@stanford.edu) |
| Pseudocode | No | The paper describes the algorithms (MALA, HMC) through mathematical formulations and textual descriptions but does not provide structured pseudocode blocks or algorithm listings; an illustrative MALA sketch is given after this table. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., repository link or explicit statement of code release) for the methodology described. |
| Open Datasets | No | This is a theoretical paper without empirical studies, so no public datasets are used or referenced. |
| Dataset Splits | No | This is a theoretical paper and does not involve empirical studies with dataset splits. |
| Hardware Specification | No | This is a theoretical paper and does not describe any experimental setup involving specific hardware. |
| Software Dependencies | No | The paper does not specify any supporting software or version numbers used in its research. |
| Experiment Setup | No | This is a theoretical paper and does not describe any experimental setup details such as hyperparameters or training configurations. |
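
For orientation on the algorithm the lower bounds concern, the following is a minimal, textbook-style sketch of one MALA step in Python. It is not from the paper; the function names, step size, and the 2-d standard Gaussian target (condition number κ = 1) are illustrative assumptions.

```python
import numpy as np

def mala_step(x, log_density, grad_log_density, step_size, rng):
    """One Metropolis-adjusted Langevin (MALA) step targeting exp(log_density)."""
    d = x.shape[0]
    # Langevin proposal: drift along the gradient, then add Gaussian noise.
    mean_fwd = x + step_size * grad_log_density(x)
    proposal = mean_fwd + np.sqrt(2.0 * step_size) * rng.standard_normal(d)
    # Forward/backward proposal log-densities (shared normalizing constants cancel).
    mean_bwd = proposal + step_size * grad_log_density(proposal)
    log_q_fwd = -np.sum((proposal - mean_fwd) ** 2) / (4.0 * step_size)
    log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (4.0 * step_size)
    # Metropolis-Hastings accept/reject correction.
    log_accept = (log_density(proposal) - log_density(x)
                  + log_q_bwd - log_q_fwd)
    if np.log(rng.uniform()) < log_accept:
        return proposal, True
    return x, False

# Usage: run 1000 steps on a 2-d standard Gaussian.
rng = np.random.default_rng(0)
log_pi = lambda x: -0.5 * np.dot(x, x)   # log-density up to a constant
grad_log_pi = lambda x: -x
x = np.ones(2)
for _ in range(1000):
    x, accepted = mala_step(x, log_pi, grad_log_pi, step_size=0.1, rng=rng)
```

Without the accept/reject step this reduces to the unadjusted Langevin algorithm; the Metropolis correction makes the chain exactly stationary for the target, and it is this metropolized chain whose mixing time the paper lower-bounds.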