Probabilistic Programs with Stochastic Conditioning

Authors: David Tolpin, Yuan Zhou, Tom Rainforth, Hongseok Yang

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate potential usage of stochastic conditioning on several case studies which involve various kinds of stochastic conditioning and are difficult to solve otherwise. In the case studies, we explore several problems cast as probabilistic programs with stochastic conditioning. We place y ∼ D above a rule to denote that distribution D is observed through y and is otherwise unknown to the model, as in (12). Some models are more natural to express in terms of the joint probability that they compute than in terms of distributions from which x is drawn and y is observed. In that case, we put the expression for the joint probability p(x, y) under the rule, as in (13). The code and data for the case studies are provided in repository https://bitbucket.org/dtolpin/stochastic-conditioning.
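Under stochastic conditioning, the model is conditioned on a distribution D rather than a single value: the log-density of the observation is the expectation E_{y∼D}[log p(y | x)]. A minimal sketch of a Monte Carlo estimate of that quantity, not from the paper (the helper names and the toy Normal example are hypothetical):

```python
import math
import random

def stochastic_condition_logpdf(loglik, sample_d, n=10_000):
    """Monte Carlo estimate of E_{y~D}[log p(y | x)], the log-density
    that stochastic conditioning assigns to observing D through y."""
    return sum(loglik(sample_d()) for _ in range(n)) / n

def normal_logpdf(y, mu, sigma=1.0):
    # log N(y; mu, sigma^2)
    return -0.5 * math.log(2 * math.pi * sigma**2) - (y - mu) ** 2 / (2 * sigma**2)

# Toy example: likelihood N(y; x, 1) with x = 0, conditioned on D = N(0, 1).
random.seed(0)
ll = stochastic_condition_logpdf(lambda y: normal_logpdf(y, mu=0.0),
                                 lambda: random.gauss(0.0, 1.0))
# Analytic value is -0.5 * log(2*pi) - 0.5, roughly -1.419.
```

Because the estimate averages log-densities (not densities), it is an unbiased estimate of the stochastic-conditioning log-likelihood itself, which is why gradient-based samplers such as stochastic gradient HMC apply directly.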
Researcher Affiliation | Academia | ¹Ben-Gurion University of the Negev, ²Artificial Intelligence Research Center, DII, ³University of Oxford, ⁴School of Computing, KAIST. Correspondence to: David Tolpin <david.tolpin@gmail.com>.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code and data for the case studies are provided in repository https://bitbucket.org/dtolpin/stochastic-conditioning.
Open Datasets | Yes | This case study is inspired by Rubin (1983), also appearing as Section 7.6 in Gelman et al. (2013). The original case study evaluated Bayesian inference on the problem of estimating the total population of 804 municipalities of New York state based on a sample of 100 municipalities.
Dataset Splits | No | The paper describes using synthetic observations and samples from a population study, but it does not specify explicit training, validation, or test dataset splits.
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments, nor does it specify any CPU or GPU models, or cloud resources.
Software Dependencies | No | The paper mentions implementing their work using 'Infergo (Tolpin, 2019)' but does not specify a version number for Infergo or any other software dependencies.
Experiment Setup | Yes | We fit the model using stochastic gradient Hamiltonian Monte Carlo and used 10 000 samples to approximate the posterior. We fit the model using pseudo-marginal Metropolis-Hastings (Andrieu & Roberts, 2009) and used 10 000 samples to approximate the posterior.
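Pseudo-marginal Metropolis-Hastings, cited above, replaces the intractable likelihood in the acceptance ratio with a noisy but unbiased estimate, yet still targets the exact posterior. A generic sketch under a hypothetical toy model (not the paper's implementation; all names below are illustrative):

```python
import math
import random

def normal_pdf(y, mu, sigma=1.0):
    # N(y; mu, sigma^2)
    return math.exp(-(y - mu) ** 2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)

def pseudo_marginal_mh(lik_estimator, prior_logpdf, x0, n_samples, step=1.0):
    """Random-walk Metropolis-Hastings in which the likelihood is replaced
    by an unbiased estimate; the chain still targets the exact posterior
    (Andrieu & Roberts, 2009)."""
    x, lhat = x0, lik_estimator(x0)
    chain = []
    for _ in range(n_samples):
        xp = x + random.gauss(0.0, step)          # symmetric proposal
        lhat_p = lik_estimator(xp)                # fresh unbiased estimate
        log_alpha = (math.log(lhat_p) + prior_logpdf(xp)
                     - math.log(lhat) - prior_logpdf(x))
        if math.log(random.random()) < log_alpha:
            x, lhat = xp, lhat_p                  # accept: keep this estimate
        chain.append(x)
    return chain

# Toy model: y = x + z + e with z, e ~ N(0, 1); we observe y = 0 and
# estimate the marginal likelihood p(y = 0 | x) by averaging over z-samples.
def lik_estimator(x, m=32):
    return sum(normal_pdf(0.0, x + random.gauss(0.0, 1.0)) for _ in range(m)) / m

random.seed(1)
chain = pseudo_marginal_mh(lik_estimator,
                           lambda x: -x * x / 200.0,  # N(0, 10^2) prior, up to a constant
                           x0=0.0, n_samples=5000)
```

The key invariant is that an accepted state keeps the likelihood estimate that got it accepted; re-estimating it at every step would break the exactness guarantee.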