Optimal randomized multilevel Monte Carlo for repeatedly nested expectations

Authors: Yasa Syed, Guanyang Wang

ICML 2023

Reproducibility Variable Result LLM Response
Research Type Experimental We test our algorithm on three examples. Our comparison result is summarized in Figure 1.
Researcher Affiliation Academia Department of Statistics, Rutgers University, New Brunswick, United States.
Pseudocode Yes Algorithm 1 A recursive rMLMC algorithm for RNEs
Open Source Code Yes Our code is available at https://github.com/guanyangwang/rMLMC_RNE.
Open Datasets No The paper uses synthetic data generated from specified distributions and financial models (e.g., 'y(0) ∼ N(π/2, 1)', 'non-central t-distribution'), but does not utilize or provide access information for any publicly available datasets.
Dataset Splits No The paper does not provide details on specific train/validation/test dataset splits. Experiments are conducted on simulated processes or financial models rather than standard datasets with such splits.
Hardware Specification No The paper mentions running experiments on a '500-core cluster' but does not provide specific hardware details such as CPU/GPU models, memory, or exact cluster specifications.
Software Dependencies No The paper does not specify any software dependencies with version numbers required to replicate the experiments.
Experiment Setup Yes For READ, since all assumptions in Theorem 2.2 are satisfied, the READ estimator generated by Algorithm 1 is unbiased and of finite variance whenever r0 ∈ (1/2, 3/4) and r1 ∈ (1/2, 1 − 2^(−4/3)). Since the computational cost decreases as each ri grows, we choose r0 = 0.74 and r1 = 0.6 (close to the upper ends of their respective ranges) to improve computational efficiency. We also adopt the standard parameters in (Jain & Oosterlee, 2012; Bender et al., 2006; Zhou et al., 2022): T = 3, M = 5, σ = 0.2, r = 0.05, and K = y_i(0) = 100 for every i.