Langevin Monte Carlo for strongly log-concave distributions: Randomized midpoint revisited
Authors: Lu Yu, Avetik Karagulyan, Arnak S. Dalalyan
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we compare the performance of the LMC, KLMC, RLMC, and RKLMC algorithms. We apply the four algorithms to the posterior density of penalized logistic regression, defined by $\pi(\vartheta) \propto \exp(-f(\vartheta))$, with the potential function $f(\vartheta) = \frac{\lambda}{2}\|\vartheta\|^2 + \frac{1}{n_{\text{data}}}\sum_{i=1}^{n_{\text{data}}} \log\big(1 + \exp(-y_i x_i^\top \vartheta)\big)$, where $\lambda > 0$ denotes the tuning parameter. The data $\{x_i, y_i\}_{i=1}^{m}$ consist of binary labels $y_i \in \{-1, 1\}$ and features $x_i \in \mathbb{R}^p$ generated as $x_{i,j} \overset{\text{iid}}{\sim} \mathcal{N}(0, 1)$, $\mathcal{N}(0, 5)$, and $\mathcal{N}(0, 10)$, corresponding to the plots from left to right, respectively. In our experiments, we have chosen $\lambda = 1/100$, $p = 3$, and $n_{\text{data}} = 100$. Figure 1 shows the $W_2$-distance, measured along the first dimension, between the empirical distributions of the samples from the four algorithms and the target distribution, for different choices of $h$. These numerical results confirm our theoretical results. (A hedged code sketch of this setup appears below the table.) |
| Researcher Affiliation | Academia | Lu Yu (CREST, ENSAE, IP Paris, lu.yu@ensae.fr); Avetik Karagulyan (KAUST, avetik.karagulyan@kaust.edu.sa); Arnak Dalalyan (CREST, ENSAE, IP Paris, arnak.dalalyan@ensae.fr) |
| Pseudocode | No | The paper describes algorithms through mathematical equations and textual explanations, but it does not include any clearly labeled pseudocode blocks or algorithm figures. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing open-source code for the described methodology, nor does it provide any links to a code repository. |
| Open Datasets | No | The paper describes generating synthetic data for its experiments: "The data $\{x_i, y_i\}_{i=1}^{m}$, composed of binary labels $y_i \in \{-1, 1\}$ and features $x_i \in \mathbb{R}^p$ generated from $x_{i,j} \overset{\text{iid}}{\sim} \mathcal{N}(0, 1)$, $\mathcal{N}(0, 5)$, and $\mathcal{N}(0, 10)$". It does not reference a publicly available dataset with a citation, link, or repository. |
| Dataset Splits | No | The paper does not specify training, validation, or test dataset splits. It describes generating data for the numerical experiments but does not mention how this data was partitioned. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments, such as CPU or GPU models. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers needed to replicate the experiments. |
| Experiment Setup | Yes | In our experiments, we have chosen $\lambda = 1/100$, $p = 3$, and $n_{\text{data}} = 100$. ... Figure 1 shows the $W_2$-distance, measured along the first dimension, between the empirical distributions of the samples from the four algorithms and the target distribution, for different choices of $h$. (A sketch of the one-dimensional $W_2$ computation also appears below the table.) |
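
The following is a minimal Python sketch of the experimental setup quoted above: the penalized logistic regression potential $f$, its gradient, synthetic data with $\lambda = 1/100$, $p = 3$, $n_{\text{data}} = 100$, and a vanilla LMC run. It is not the authors' code; the random seed, the label-generating mechanism, the step size, and the number of iterations are illustrative assumptions, and only plain LMC (Euler discretization) is shown rather than the paper's randomized-midpoint variants.

```python
import numpy as np

# --- Synthetic data, following the quoted setup (lambda = 1/100, p = 3, n_data = 100) ---
rng = np.random.default_rng(0)          # seed is an arbitrary choice, not from the paper
p, n_data, lam = 3, 100, 1.0 / 100.0
sigma2 = 1.0                            # feature variance; the paper also uses 5 and 10
X = rng.normal(0.0, np.sqrt(sigma2), size=(n_data, p))
theta_star = rng.normal(size=p)         # hypothetical parameter used only to draw labels
y = np.where(rng.random(n_data) < 1.0 / (1.0 + np.exp(-X @ theta_star)), 1.0, -1.0)

# --- Potential f and its gradient for pi(theta) proportional to exp(-f(theta)) ---
def potential(theta):
    margins = y * (X @ theta)
    return 0.5 * lam * theta @ theta + np.mean(np.log1p(np.exp(-margins)))

def grad_potential(theta):
    margins = y * (X @ theta)
    weights = -y / (1.0 + np.exp(margins))   # derivative of log(1 + e^{-m}) w.r.t. m
    return lam * theta + (X.T @ weights) / n_data

# --- Vanilla LMC: theta_{k+1} = theta_k - h * grad f(theta_k) + sqrt(2h) * xi_k ---
def lmc(h, n_iter, theta0):
    theta = theta0.copy()
    samples = np.empty((n_iter, p))
    for k in range(n_iter):
        theta = theta - h * grad_potential(theta) + np.sqrt(2.0 * h) * rng.normal(size=p)
        samples[k] = theta
    return samples

samples = lmc(h=0.1, n_iter=5000, theta0=np.zeros(p))   # step size h is illustrative
```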
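
The paper reports the $W_2$-distance along the first coordinate. For two equal-size empirical distributions on the real line, the $W_2$-distance is computed by the quantile (sorting) coupling; the sketch below assumes this reading. How the reference draws from the target were produced is not specified in the excerpt, so `target_samples` is a hypothetical placeholder.

```python
import numpy as np

def w2_first_dim(samples_a, samples_b):
    """One-dimensional W2 distance between the empirical distributions of the
    first coordinates of two sample sets of equal size, via sorting."""
    a = np.sort(samples_a[:, 0])
    b = np.sort(samples_b[:, 0])
    assert a.size == b.size, "sketch assumes equal sample sizes"
    return np.sqrt(np.mean((a - b) ** 2))

# Hypothetical usage, comparing LMC output against reference draws from the target:
# dist = w2_first_dim(samples, target_samples)
```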