Double-Loop Unadjusted Langevin Algorithm

Authors: Paul Rolland, Armin Eftekhari, Ali Kavis, Volkan Cevher

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This work proposes a new annealed step-size schedule for ULA, which makes it possible to prove new convergence guarantees for sampling from a smooth log-concave distribution that are not covered by existing state-of-the-art results. To establish this, the work derives a new theoretical bound relating the Wasserstein distance to the total variation distance between any two log-concave distributions, complementing the reach of the Talagrand T2 inequality.
Researcher Affiliation | Academia | (1) LIONS, École Polytechnique Fédérale de Lausanne, Switzerland; (2) Department of Mathematics and Mathematical Statistics, Umeå University, Sweden.
Pseudocode | Yes | Algorithm 1 (Double-Loop Unadjusted Langevin Algorithm, DL-ULA) and Algorithm 2 (DL-MYULA) are provided.
Open Source Code | No | The paper contains no statement about releasing code and provides no link to a code repository.
Open Datasets | No | The paper is theoretical, analyzing sampling algorithms for probability distributions rather than running experiments on specific datasets, so no training data is mentioned.
Dataset Splits | No | The paper is theoretical and does not include empirical validation on datasets, so no validation splits are mentioned.
Hardware Specification | No | The paper provides no details about the hardware used for any computational work.
Software Dependencies | No | The paper does not list software dependencies or version numbers.
Experiment Setup | No | The paper defines algorithm parameters (e.g., n_k = LM^2 d k^2 e^{3k}, γ_k ∝ 1/e^{2k}) as part of the theoretical analysis, but these are not experiment-setup details in the sense of empirical hyperparameters (learning rates, batch sizes, optimizers).
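Since no code is released, the double-loop structure can be illustrated with a minimal sketch. The snippet below is not the paper's Algorithm 1; it only shows the generic pattern of an outer annealing loop that shrinks the ULA step size gamma_k while growing the inner iteration budget n_k. The function name, the specific schedules, and all constants are hypothetical placeholders chosen for readability.

```python
import numpy as np

def dl_ula_sketch(grad_U, x0, K=3, L=1.0, seed=0):
    """Illustrative double-loop ULA sketch (not the paper's exact algorithm).

    Outer phase k uses a shrinking step size gamma_k and an increasing
    inner iteration count n_k; the inner loop runs plain ULA updates
    x <- x - gamma * grad_U(x) + sqrt(2 * gamma) * noise.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    for k in range(1, K + 1):
        # Assumed schedules: step size decays, budget grows with the phase.
        gamma_k = np.exp(-2 * k) / L
        n_k = int(np.ceil(d * k**2 * np.exp(3 * k)))
        for _ in range(n_k):
            noise = rng.standard_normal(d)
            x = x - gamma_k * grad_U(x) + np.sqrt(2.0 * gamma_k) * noise
    return x
```

For a standard Gaussian target U(x) = ||x||^2 / 2, the gradient is grad_U(x) = x, and the chain's final iterate is approximately a draw from N(0, I).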