Amortized Monte Carlo Integration

Authors: Adam Goliński, Frank Wood, Tom Rainforth

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "It is therefore necessary to test its empirical performance to assert that gains are possible with inexact proposals. To this end, we investigate AMCI's performance on two illustrative examples. [...] As shown in Figure 1, AMCI outperformed SNIS in both the one- and five-dimensional cases. [...] Results are presented in Figure 2. AMCI again significantly outperformed the literature baseline of SNIS q2."
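The paper's baseline is self-normalized importance sampling (SNIS). A minimal sketch of the SNIS estimator on a toy tail-integral problem (the target, proposal, and threshold below are illustrative assumptions, not the paper's experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (assumed, not from the paper): estimate E_p[f(x)] with
# p(x) = N(0, 1), f(x) = 1{x > 2}, using one proposal q(x) = N(2, 1)
# and self-normalized importance sampling (SNIS).
def snis_estimate(n):
    x = rng.normal(2.0, 1.0, size=n)   # samples from the proposal q
    log_p = -0.5 * x**2                # unnormalized log p(x)
    log_q = -0.5 * (x - 2.0)**2        # unnormalized log q(x); constants cancel
    w = np.exp(log_p - log_q)          # importance weights
    f = (x > 2.0).astype(float)        # integrand
    return np.sum(w * f) / np.sum(w)   # self-normalized estimate

est = snis_estimate(100_000)           # true value P(X > 2) is about 0.0228
```

SNIS uses a single proposal for both the numerator and the normalizing constant; AMCI's point is that learning separate proposals for each term can reduce variance.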
Researcher Affiliation | Academia | "Adam Goliński *1,2, Frank Wood 3, Tom Rainforth *1. 1 Department of Statistics, University of Oxford, United Kingdom; 2 Department of Engineering Science, University of Oxford, United Kingdom; 3 Department of Computer Science, University of British Columbia, Vancouver, Canada."
Pseudocode | No | No pseudocode or algorithm blocks are present in the paper.
Open Source Code | Yes | "An implementation for AMCI and our experiments is available at http://github.com/talesa/amci."
Open Datasets | No | "We start with the conceptually simple problem of calculating tail integrals for Gaussian distributions, namely p(x) = N(x; 0, Σ₁), p(y|x) = N(y; x, Σ₂), f(x; θ) = ∏_{i=1}^{D} 1{x_i > θ_i}, p(θ) = Uniform(θ; [0, u_D]^D) (24), where D is the dimensionality, we set Σ₂ = I, and Σ₁ is a fixed covariance matrix (for details see Appendix C). [...] To demonstrate how AMCI might be used in a more real-world scenario, we now consider an illustrative example relating to cancer diagnostic decisions. [...] A detailed description of the model and proposal setup is in Appendix C.3."
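The Gaussian tail-integral model above can be sketched with a naive Monte Carlo estimate; the choices D = 2, Σ₁ = I, and the fixed θ below are illustrative assumptions (the paper uses a general fixed Σ₁ and draws θ from a uniform prior):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the tail-integral setup (Eq. 24), with assumed D = 2,
# Sigma1 = I, and a fixed theta rather than theta ~ p(theta).
D = 2
theta = np.array([0.5, 0.5])

def f(x, theta):
    # f(x; theta) = prod_i 1{x_i > theta_i}
    return np.all(x > theta, axis=-1).astype(float)

# Naive Monte Carlo estimate of the tail integral E_{p(x)}[f(x; theta)]
x = rng.standard_normal((1_000_000, D))   # x ~ N(0, Sigma1) with Sigma1 = I
estimate = f(x, theta).mean()
```

With θ = (0.5, 0.5) and independent components, the true value factorizes as P(X > 0.5)² ≈ 0.095; naive sampling works here only because the tail event is not too rare, which is exactly why importance sampling is needed in higher dimensions.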
Dataset Splits | No | "Though the exact process varies with context, the inference network is usually trained either by drawing latent-data sample pairs from the joint p(x, y) (Paige & Wood, 2016; Le et al., 2017; 2018b), or by drawing mini-batches from a large dataset using stochastic variational inference approaches (Hoffman et al., 2013; Kingma & Welling, 2014; Rezende et al., 2014; Ritchie et al., 2016)."
Hardware Specification | No | No specific hardware details (such as GPU/CPU models, memory, or computing environment) are provided for the experiments.
Software Dependencies | No | "We use normalizing flows (Rezende & Mohamed, 2015) to construct our proposals, providing a flexible and powerful means of representing the target distributions."
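A normalizing-flow proposal pairs sampling with exact density evaluation via the change-of-variables formula. A minimal sketch using a single affine (scale-and-shift) transform of a standard normal base; the paper uses richer flows (Rezende & Mohamed, 2015), and the scales, shifts, and dimension here are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy flow: x = exp(log_s) * z + m with z ~ N(0, I).
# log_s and m would normally be learned; here they are fixed.
D = 2
log_s = np.array([0.3, -0.2])   # per-dimension log-scales
m = np.array([1.0, 0.5])        # per-dimension shifts

def sample_and_log_prob(n):
    z = rng.standard_normal((n, D))      # base sample z ~ N(0, I)
    x = np.exp(log_s) * z + m            # flow transform
    # Change of variables: log q(x) = log N(z; 0, I) - sum(log_s)
    log_base = -0.5 * np.sum(z**2, axis=1) - 0.5 * D * np.log(2 * np.pi)
    log_q = log_base - np.sum(log_s)
    return x, log_q

x, log_q = sample_and_log_prob(5)
```

The tractable log q(x) is what makes flows usable as importance-sampling proposals: the weight p(x)/q(x) can be evaluated exactly for every sample.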
Experiment Setup | No | "Training was done by using importance sampling to generate the values of θ and x as per (22) with q(θ, x) = p(θ) HalfNormal(x; θ, diag(Σ₂))."
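A sketch of drawing (θ, x) training pairs from the proposal q(θ, x) = p(θ) HalfNormal(x; θ, diag(Σ₂)); the values D = 2, uniform upper bound u = 2, and Σ₂ = I are illustrative assumptions, and the half-normal is realized in location form as θ plus the absolute value of a Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed constants: D = 2, prior upper bound u = 2, Sigma2 = I.
D, u = 2, 2.0

def sample_training_pairs(n):
    theta = rng.uniform(0.0, u, size=(n, D))   # theta ~ Uniform([0, u]^D)
    # HalfNormal located at theta with scale diag(Sigma2):
    # x = theta + |z|, z ~ N(0, I), so every x satisfies x_i > theta_i,
    # i.e. all training samples land where f(x; theta) is nonzero.
    x = theta + np.abs(rng.standard_normal((n, D)))
    return theta, x

theta, x = sample_training_pairs(1000)
```

Concentrating the proposal on the region x > θ is the point of this construction: pairs drawn from the joint p(θ)p(x) would mostly have f(x; θ) = 0 and contribute nothing to training.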