Exponential ergodicity of mirror-Langevin diffusions

Authors: Sinho Chewi, Thibaut Le Gouic, Chen Lu, Tyler Maunu, Philippe Rigollet, Austin Stromme

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we examine the numerical performance of the Newton-Langevin Algorithm (NLA)... Figure 4 compares the performance of NLA to that of the Unadjusted Langevin Algorithm (ULA) [DM+19] and of the Tamed Unadjusted Langevin Algorithm (TULA) [Bro+19]." (Section 5, Numerical experiments)
Researcher Affiliation | Academia | Sinho Chewi (MIT, schewi@mit.edu); Thibaut Le Gouic (MIT, tlegouic@mit.edu); Chen Lu (MIT, chenl819@mit.edu); Tyler Maunu (MIT, maunut@mit.edu); Philippe Rigollet (MIT, rigollet@mit.edu); Austin Stromme (MIT, astromme@mit.edu)
Pseudocode | Yes | "In this section, we examine the numerical performance of the Newton-Langevin Algorithm (NLA), which is given by the following Euler discretization of NLD: $\nabla V(X_{k+1}) = (1-h)\,\nabla V(X_k) + \sqrt{2h}\,[\nabla^2 V(X_k)]^{1/2}\,\xi_k$ (NLA)"
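To make the quoted update concrete, below is a minimal Python sketch of the NLA iteration under the simplifying assumption of a Gaussian potential $V(x) = \frac{1}{2} x^\top \Sigma^{-1} x$, for which the mirror step can be inverted in closed form. The function name `nla_gaussian` and the Gaussian special case are illustrative assumptions, not the authors' implementation; for a general $V$, each step requires numerically solving $\nabla V(x) = y$.

```python
import numpy as np

def nla_gaussian(Sigma, x0, h=0.2, n_steps=1000, rng=None):
    """Sketch of the Newton-Langevin Algorithm (NLA) for V(x) = 0.5 x^T Sigma^{-1} x.

    The NLA update reads
        grad V(X_{k+1}) = (1 - h) grad V(X_k) + sqrt(2h) [hess V(X_k)]^{1/2} xi_k,
    with xi_k ~ N(0, I). Here grad V(x) = Sigma^{-1} x and hess V = Sigma^{-1},
    so applying Sigma to both sides yields the explicit step
        X_{k+1} = (1 - h) X_k + sqrt(2h) Sigma^{1/2} xi_k.
    (The Gaussian case is assumed for illustration only.)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Symmetric square root of Sigma via its eigendecomposition.
    w, U = np.linalg.eigh(Sigma)
    sqrt_Sigma = (U * np.sqrt(w)) @ U.T
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        xi = rng.standard_normal(x.shape)
        x = (1.0 - h) * x + np.sqrt(2.0 * h) * sqrt_Sigma @ xi
    return x
```

In this Gaussian case the mean of the iterates contracts by a factor $(1-h)$ per step regardless of how ill-conditioned $\Sigma$ is, which illustrates the condition-number-independent convergence the paper establishes for NLD.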
Open Source Code | No | The paper does not provide any statement or link regarding the public availability of its source code.
Open Datasets | No | The paper mentions "sampling from an ill-conditioned generalized Gaussian distribution on R^100" and "sampling from the uniform distribution on a convex body C", but does not provide concrete access information (link, DOI, repository, or citation) for a specific public dataset used in its experiments.
Dataset Splits | No | The paper does not provide specific details on train, validation, or test dataset splits (e.g., percentages, sample counts, or predefined split references).
Hardware Specification | No | The paper does not provide any specific hardware details, such as CPU/GPU models, processor types, or memory, used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks with their respective versions) used for implementation or experiments.
Experiment Setup | Yes | "Figure 4 compares the performance of NLA to that of the Unadjusted Langevin Algorithm (ULA) [DM+19] and of the Tamed Unadjusted Langevin Algorithm (TULA) [Bro+19]. We run the algorithms 50 times and compute running estimates for the mean and scatter matrix of the family following [ZWG13]." The Figure 4 legend lists each method at two step sizes: NLA, ULA, and TULA with h = 0.2 and with h = 0.05. From Section 4.2: "For NLA, we take $\tilde{V}(x) = -\log(1 - x_1^2) - \log(a^2 - x_2^2)$ and $\beta = 10^{-4}$."
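As a small illustration of the Section 4.2 setup, the sketch below computes the gradient and Hessian of the reconstructed barrier $\tilde{V}$, the two quantities the NLA step needs. The box domain $\{|x_1| < 1,\ |x_2| < a\}$ is inferred from the barrier's form, and the helper name `barrier_grad_hess` is hypothetical.

```python
import numpy as np

def barrier_grad_hess(x, a):
    """Gradient and Hessian of the log-barrier
        V~(x) = -log(1 - x1^2) - log(a^2 - x2^2),
    a barrier for the box {|x1| < 1, |x2| < a} (domain inferred, not stated
    verbatim in this excerpt). Valid only for x strictly inside the box.
    """
    x1, x2 = x
    grad = np.array([
        2.0 * x1 / (1.0 - x1**2),     # d/dx1 of -log(1 - x1^2)
        2.0 * x2 / (a**2 - x2**2),    # d/dx2 of -log(a^2 - x2^2)
    ])
    hess = np.diag([
        2.0 * (1.0 + x1**2) / (1.0 - x1**2)**2,
        2.0 * (a**2 + x2**2) / (a**2 - x2**2)**2,
    ])
    return grad, hess
```

Because the barrier blows up at the boundary, its gradient maps the open box bijectively onto R^2, which is what allows a mirror-Langevin step to keep the iterates inside the convex body.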