Stein $\Pi$-Importance Sampling

Authors: Congye Wang, Wilson Ye Chen, Heishiro Kanagawa, Chris J. Oates

NeurIPS 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "The approach is stress-tested using the recently released Posterior DB suite of benchmark tasks in Section 4, before concluding with a discussion in Section 5." |
| Researcher Affiliation | Academia | "Congye Wang¹, Wilson Ye Chen², Heishiro Kanagawa¹, Chris J. Oates¹ (¹ Newcastle University, UK; ² University of Sydney, Australia)" |
| Pseudocode | Yes | "Algorithm 1: Π-Invariant Metropolis-Adjusted Langevin Algorithm (MALA); Algorithm 2: Stein Π-Importance Sampling (SΠIS-MALA); Algorithm 3: Stein Π-Thinning (SΠT-MALA)" |
| Open Source Code | Yes | "All experiments that we report can be reproduced using code available at https://github.com/congyewang/Stein-Pi-Importance-Sampling." |
| Open Datasets | Yes | "To introduce objectivity into our assessment, we exploited the recently released Posterior DB benchmark (Magnusson et al., 2022)." |
| Dataset Splits | No | The paper does not describe dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run the experiments. |
| Software Dependencies | Yes | "The linearly-constrained quadratic programme in Algorithm 2 was solved using the Python v3.10.4 packages qpsolvers v3.4.0 and proxsuite v0.3.7. [...] The test problems in Posterior DB are defined in the Stan probabilistic programming language, and so BridgeStan (Roualdes et al., 2023) was used to directly access posterior densities and their gradients as required. [...] The version of Stan that we used was stanc3 v2.31.0 (Unix)." |
| Experiment Setup | Yes | "For all experiments that we report using MALA, we set ϵ0 = 1, M0 = Id, h = 10, and α1 = ... = α9 = 0.3. The warm-up epoch lengths were n0 = ... = n8 = 1,000 and the final epoch length was n9 = 10^5." |
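The reported setup (ϵ0 = 1, M0 = Id) corresponds to MALA with unit step size and an identity preconditioner. As a minimal sketch of one such transition, assuming a toy 2-d standard Gaussian target in place of the PosteriorDB models used in the paper (the function names here are illustrative, not the authors' code):

```python
import numpy as np

# Hypothetical toy target: 2-d standard Gaussian (log-density up to a constant).
def log_p(x):
    return -0.5 * x @ x

def grad_log_p(x):
    return -x

def mala_step(x, eps, rng):
    """One Metropolis-adjusted Langevin step with identity preconditioner M0 = I."""
    d = x.size
    # Langevin proposal: drift half a squared-step along the score, plus noise.
    mean_x = x + 0.5 * eps**2 * grad_log_p(x)
    y = mean_x + eps * rng.standard_normal(d)
    mean_y = y + 0.5 * eps**2 * grad_log_p(y)
    # Log Metropolis-Hastings ratio; Gaussian normalising constants cancel.
    log_q_y_given_x = -np.sum((y - mean_x) ** 2) / (2 * eps**2)
    log_q_x_given_y = -np.sum((x - mean_y) ** 2) / (2 * eps**2)
    log_alpha = log_p(y) + log_q_x_given_y - log_p(x) - log_q_y_given_x
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False

rng = np.random.default_rng(0)
x = np.zeros(2)
eps = 1.0  # epsilon_0 = 1, as in the reported setup
n = 1000
samples = np.empty((n, 2))
accepted = 0
for i in range(n):
    x, acc = mala_step(x, eps, rng)
    accepted += acc
    samples[i] = x
```

Note this omits the paper's Π-invariant modification and the epoch-wise adaptation implied by h, α, and the warm-up lengths; it only illustrates the base MALA transition kernel that those settings parameterise.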