Robust Neural Posterior Estimation and Statistical Model Criticism

Authors: Daniel Ward, Patrick Cannon, Mark Beaumont, Matteo Fasiolo, Sebastian Schmon

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We assess the approach on a range of artificially misspecified examples, and find RNPE performs well across the tasks, whereas naïvely using NPE leads to misleading and erratic posteriors.
Researcher Affiliation | Collaboration | 1. School of Mathematics, University of Bristol, UK; 2. Improbable, UK; 3. School of Biological Sciences, University of Bristol, UK; 4. Department of Mathematical Sciences, Durham University, UK
Pseudocode | Yes | Pseudo-code for the overall approach is given in Algorithm 1.
Open Source Code | Yes | The code required to reproduce all the results from this manuscript is available at https://github.com/danielward27/rnpe.
Open Datasets | No | The paper describes generating 'N = 50,000 simulations' and '1000 different observations and ground truth parameter pairs' for its tasks, but does not refer to or provide access information for any pre-existing, publicly available datasets.
Dataset Splits | No | The paper mentions training on 'N = 50,000 simulations' and evaluating on '1000 different observations', but does not specify explicit train/validation/test splits for model training.
Hardware Specification | No | The provided text does not specify the hardware used to run the experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper mentions software such as the NumPyro Python package, JAX, Equinox, and Numba, but does not specify version numbers for reproducibility.
Experiment Setup | Yes | For all experiments, we used N = 50,000 simulations, with M = 100,000 MCMC samples following 20,000 warm-up steps. The MCMC chains were initialised using a random simulation, and z_j = 1 for j = 1, ..., D. To build the approximation q(x), we used block neural autoregressive flows (De Cao et al., 2020). For the approximation of q(θ | x), we used neural spline flows (Durkan et al., 2019). For all tasks the hyperparameters were kept consistent; information on hyperparameter choices can be found in Appendix C.
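
To make the setup row concrete, here is a minimal, hypothetical NumPyro sketch of the MCMC denoising step it describes: sampling the "clean" data x together with spike-and-slab indicators z, given an observation. Everything named below (the model denoising_model, the standard-normal stand-in for q(x), the Bernoulli(0.5) prior on z, the two noise scales, and the toy observation) is an illustrative assumption rather than the paper's exact error model; the authors' actual implementation is in the repository linked above.

```python
# Hypothetical sketch of the MCMC denoising step implied by the setup row,
# written with NumPyro (which the paper uses). All modelling choices below
# are illustrative assumptions, not the paper's exact error model.
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS, DiscreteHMCGibbs

def denoising_model(y_obs):
    D = y_obs.shape[0]
    # Stand-in for the learned q(x); the paper fits a block neural
    # autoregressive flow to the N = 50,000 simulations here.
    x = numpyro.sample("x", dist.Normal(jnp.zeros(D), 1.0).to_event(1))
    with numpyro.plate("dims", D):
        # Spike-and-slab indicators, one per data dimension
        # (the paper initialises its chains at z_j = 1 for all j).
        z = numpyro.sample("z", dist.Bernoulli(0.5))
        # z_j = 1 selects a tight "spike" noise scale, z_j = 0 a broad
        # "slab"; the scales 0.01 and 1.0 are illustrative choices.
        scale = jnp.where(z == 1, 0.01, 1.0)
        numpyro.sample("y", dist.Normal(x, scale), obs=y_obs)

y_obs = jnp.array([0.1, -0.2, 5.0])  # toy observation; last entry "misspecified"
kernel = DiscreteHMCGibbs(NUTS(denoising_model))  # Gibbs over z, NUTS over x
mcmc = MCMC(kernel, num_warmup=20_000, num_samples=100_000)  # sizes from the paper
mcmc.run(random.PRNGKey(0), y_obs)
x_samples = mcmc.get_samples()["x"]  # denoised data, to be fed into q(theta | x)
```

In the paper's pipeline (Algorithm 1), the resulting x samples would then be passed through the neural spline flow approximation of q(θ | x) to obtain the robust posterior, while the posterior over the indicators z supplies the model-criticism signal.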