Variational Bayesian Optimal Experimental Design
Authors: Adam Foster, Martin Jankowiak, Eli Bingham, Paul Horsfall, Yee Whye Teh, Thomas Rainforth, Noah Goodman
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show theoretically and empirically that these estimators can provide significant gains in speed and accuracy over previous approaches. We further demonstrate the practicality of our approach on a number of end-to-end experiments. |
| Researcher Affiliation | Collaboration | Department of Statistics, University of Oxford, Oxford, UK Uber AI Labs, Uber Technologies Inc., San Francisco, CA, USA Stanford University, Stanford, CA, USA |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | To maximize the space of potential applications and users for our estimators, we provide a general-purpose implementation of them in the probabilistic programming system Pyro [5], exploiting Pyro's first-class support for neural networks and variational methods. Implementations of our methods are available at http://docs.pyro.ai/en/stable/contrib.oed.html. |
| Open Datasets | No | The paper describes experiment design scenarios and models (e.g., A/B testing, preference, mixed effects, extrapolation) and mentions collecting data from human participants via Amazon Mechanical Turk, but it provides no links, DOIs, repositories, or formal citations for publicly available datasets used in its experiments. |
| Dataset Splits | No | The paper discusses running experiments and simulations but does not specify explicit training, validation, or test dataset splits (e.g., percentages, sample counts, or citations to predefined splits) in the main text. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions the use of 'the probabilistic programming system Pyro [5]' but does not provide specific version numbers for Pyro or any other software dependencies needed to replicate the experiments. |
| Experiment Setup | No | The paper mentions general aspects of the experimental setup, such as a 'fixed computational budget' and 'K steps of stochastic gradient', and states that 'Full details of each model are presented in Appendix D'. However, the main text does not give concrete hyperparameter values (e.g., learning rate, batch size) or detailed training configurations for the models used in the experiments. |
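The estimators the paper targets approximate the expected information gain (EIG) of a candidate design. As a rough illustration of the baseline the paper improves on, the sketch below estimates EIG by nested Monte Carlo for a simple beta-Bernoulli model with a Uniform(0, 1) prior, where the "design" is the number of trials. This toy model and all names in it are illustrative assumptions, not the paper's models or its Pyro implementation.

```python
import math
import random

random.seed(0)

def binom_logpmf(y, n, p):
    # log pmf of Binomial(n, p) at y, guarding the p in {0, 1} edge cases
    if p <= 0.0:
        return 0.0 if y == 0 else float("-inf")
    if p >= 1.0:
        return 0.0 if y == n else float("-inf")
    return (math.log(math.comb(n, y))
            + y * math.log(p) + (n - y) * math.log(1.0 - p))

def nmc_eig(n_trials, n_outer=500, n_inner=500):
    """Nested Monte Carlo estimate of the expected information gain of
    running `n_trials` Bernoulli trials under a Uniform(0, 1) prior on
    the success probability p: E_{p,y}[log p(y|p) - log p(y)]."""
    total = 0.0
    for _ in range(n_outer):
        p = random.random()                                     # p ~ prior
        y = sum(random.random() < p for _ in range(n_trials))   # y ~ likelihood
        log_lik = binom_logpmf(y, n_trials, p)
        # Inner loop: the marginal p(y) is itself a Monte Carlo average
        # over fresh prior draws -- this nesting is what makes the
        # estimator expensive and biased for small inner sample sizes.
        inner = [math.exp(binom_logpmf(y, n_trials, random.random()))
                 for _ in range(n_inner)]
        log_marg = math.log(sum(inner) / n_inner)
        total += log_lik - log_marg
    return total / n_outer

# More trials yield more information, so the EIG estimate should grow.
eig_small, eig_large = nmc_eig(2), nmc_eig(20)
print(eig_small, eig_large)
```

The paper's variational estimators replace the costly inner expectation with a learned approximation, which is the source of the speed and accuracy gains quoted in the Research Type row above.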