Sqrt(d) Dimension Dependence of Langevin Monte Carlo
Authors: Ruilin Li, Hongyuan Zha, Molei Tao
ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our theoretical analysis is further validated by numerical experiments. ... From Section 5 (Numerical Examples): This section numerically verifies our theoretical findings for LMC in Section 4, with a particular focus on the dependence of the discretization error in Theorem 4.1 on dimension d and step size h. |
| Researcher Affiliation | Academia | Ruilin Li, Georgia Institute of Technology, ruilin.li@gatech.edu; Hongyuan Zha, School of Data Science, Shenzhen Institute of Artificial Intelligence and Robotics for Society, The Chinese University of Hong Kong, Shenzhen, zhahy@cuhk.edu.cn; Molei Tao, Georgia Institute of Technology, mtao@gatech.edu |
| Pseudocode | No | The paper describes methods in text but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about the release of their source code or a link to a code repository for the described methodology. |
| Open Datasets | No | The paper uses synthetic potentials f1(x) and f2(x), which are mathematical functions rather than publicly accessible datasets with concrete access information, for its numerical examples. |
| Dataset Splits | No | The paper describes simulation parameters and repetition counts for numerical examples with synthetic potentials, but the experiments do not involve splitting data into explicit training, validation, or test sets, as would be typical for empirical datasets. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions TensorFlow as an example of a machine learning system where LMC is implemented, but it does not specify any software dependencies with version numbers for its own experimental setup. |
| Experiment Setup | Yes | To study the dimension dependence of sampling error, we fix step size h = 0.1, and for each d ∈ {1, 2, 5, 10, 20, 50, 100, 200, 500, 1000}, we simulate 10^4 independent Markov chains using LMC algorithm for 100 iterations, which is long enough for the chain to be well-mixed. ... To study step size dependence of sampling error, we fix d = 10 and experiment with step size h ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} × 10^−1. (A minimal simulation sketch of this setup follows the table.) |
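
The Experiment Setup row above specifies the simulation protocol quoted from the paper. Below is a minimal sketch of that protocol, assuming the standard LMC update x_{k+1} = x_k − h ∇f(x_k) + √(2h) ξ_k with ξ_k ~ N(0, I_d). The paper's synthetic potentials f1 and f2 are not reproduced in the excerpt, so an isotropic Gaussian potential f(x) = ||x||²/2 is used here purely as a placeholder; the function name `run_lmc_chains` and its arguments are illustrative, not from the paper.

```python
# Minimal LMC simulation sketch mirroring the quoted experiment setup.
# NOTE: grad_f below is a placeholder (Gaussian potential), NOT the paper's f1/f2.
import numpy as np

def grad_f(x):
    # Gradient of the placeholder potential f(x) = ||x||^2 / 2.
    return x

def run_lmc_chains(d, h, n_chains=10_000, n_iters=100, seed=0):
    """Run independent LMC chains: x_{k+1} = x_k - h*grad_f(x_k) + sqrt(2h)*xi_k."""
    rng = np.random.default_rng(seed)
    x = np.zeros((n_chains, d))  # arbitrary common initialization
    for _ in range(n_iters):
        noise = rng.standard_normal((n_chains, d))
        x = x - h * grad_f(x) + np.sqrt(2.0 * h) * noise
    return x  # samples after the chains are (assumed) well-mixed

# Dimension sweep: fixed step size h = 0.1, varying d.
for d in [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000]:
    samples = run_lmc_chains(d=d, h=0.1)
    # ... estimate the sampling/discretization error from `samples` here ...

# Step-size sweep: fixed d = 10, h in {0.1, 0.2, ..., 1.0}.
for h in [0.1 * k for k in range(1, 11)]:
    samples = run_lmc_chains(d=10, h=h)
    # ... estimate the sampling/discretization error from `samples` here ...
```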