On Nesting Monte Carlo Estimators
Authors: Tom Rainforth, Rob Cornish, Hongseok Yang, Andrew Warrington, Frank Wood
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We derive corresponding rates of convergence and provide empirical evidence that these rates are observed in practice. By providing empirical assessment and examining specific applications, we provide a unified investigation of, and practical guide to, nesting MC estimators in a machine learning context. We demonstrate the applicability of our work by using our results to develop a new estimator for discrete Bayesian experimental design problems and derive error bounds for a class of variational objectives. Figure 2. Empirical convergence of NMC for (18). Figure 3. Convergence of NMC for the cancer simulation. Figure 4. Empirical convergence of NMC to (20) for an increasing total sample budget T = N₀N₁N₂. (An illustrative sketch of a nested MC estimator follows the table.) |
| Researcher Affiliation | Academia | ¹Department of Statistics, University of Oxford; ²Department of Engineering, University of Oxford; ³School of Computing, KAIST; ⁴Department of Computer Science, University of British Columbia. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific links or statements about releasing source code for the described methodology. |
| Open Datasets | No | The paper uses analytically tractable problems and simulated models (such as the cancer treatment simulator), but does not provide access information (links or citations) for any publicly available or open datasets. |
| Dataset Splits | No | The paper does not provide specific details on training, validation, or test dataset splits. The experiments involve simulating systems or using analytic models rather than splitting a dataset. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers. |
| Experiment Setup | No | While the paper describes how sampling parameters (N, M) were set for different experiments, it does not provide typical machine learning experimental setup details such as hyperparameters (learning rate, batch size), model initialization, dropout rates, or training schedules. |
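
The Research Type row above quotes the paper's figure captions on the empirical convergence of nested Monte Carlo (NMC) under a total sample budget T = N₀N₁N₂. For orientation, here is a minimal, illustrative sketch of a two-level NMC estimator. The particular choices of f(x) = x², g(y, z) = y + z, and standard-normal sampling are our own assumptions, picked so the true value is analytically checkable; they are not an experiment from the paper.

```python
import numpy as np

def nmc_estimate(n_outer, n_inner, rng):
    """Two-level nested Monte Carlo estimate of
    I = E_y[ f( E_{z|y}[ g(y, z) ] ) ]
    with the toy choices f(x) = x**2 and g(y, z) = y + z,
    y, z ~ N(0, 1) independent, so the true value is E[y**2] = 1.
    The total sample budget is T = n_outer * n_inner
    (T = N0 * N1 in the paper's notation)."""
    y = rng.standard_normal(n_outer)             # outer samples y_n
    z = rng.standard_normal((n_outer, n_inner))  # inner samples z_{n,m}
    inner = (y[:, None] + z).mean(axis=1)        # inner MC estimate of E_z[g(y_n, z)]
    return (inner ** 2).mean()                   # outer MC average of f(inner estimate)

rng = np.random.default_rng(0)
# The inner estimator's variance biases the outer estimate upward by ~1/N1,
# so the inner sample count must grow with the budget, not just the outer one.
for n_inner in (1, 10, 100):
    est = nmc_estimate(10_000, n_inner, rng)
    print(f"N1={n_inner:4d}  estimate={est:.4f}  (true value 1.0)")
```

In this toy setup the bias is roughly 1/N₁ (N₁ = 1 gives about 2.0, N₁ = 100 about 1.01), which illustrates the behavior behind the convergence plots cited in the table: a naive fixed inner sample count leaves a persistent bias, whereas growing N₁ alongside N₀ lets the NMC estimate converge.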