Implicit Deep Adaptive Design: Policy-Based Experimental Design without Likelihoods

Authors: Desi R. Ivanova, Adam Foster, Steven Kleinegesse, Michael U. Gutmann, Thomas Rainforth

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the performance of iDAD on a number of real-world experimental design problems and a range of baselines. A summary of all the methods that we consider is given in Table 1.
Researcher Affiliation | Academia | Desi R. Ivanova, Adam Foster, Steven Kleinegesse, Michael U. Gutmann, Tom Rainforth; Department of Statistics, University of Oxford; School of Informatics, University of Edinburgh; desi.ivanova@stats.ox.ac.uk
Pseudocode | Yes | Algorithm 1: Implicit Deep Adaptive Design (iDAD)
Open Source Code | Yes | Code for iDAD is publicly available at https://github.com/desi-ivanova/idad.
Open Datasets | Yes | Our next experiment is taken from the pharmacokinetics literature and has been studied in other recent works on BOED for implicit models [28, 63]. Specifically, we consider the compartmental model of [50]... Namely, we consider a formulation of the stochastic SIR model [10] that is based on stochastic differential equations (SDEs), as done by [29].
Dataset Splits | No | The paper does not explicitly provide dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) for training, validation, or testing. It mentions only training with 'simulated experimental histories' and 'simulated histories'.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor speeds, memory amounts, or other machine specifications) used for its experiments. It only mentions 'measured on a CPU' in Table 4, without further detail.
Software Dependencies | No | The paper does not provide ancillary software details with version numbers. While it references 'Pytorch: An imperative style, high-performance deep learning library' [41], it does not specify the PyTorch version or the versions of any other libraries used.
Experiment Setup | Yes | All neural networks are MLPs with 3 hidden layers of 256 nodes, unless specified otherwise. We use ReLU activations and Xavier uniform initialization. We train with the Adam optimizer [26] with learning rate 1e-4 and a batch size of 256 samples.
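
For reference, the experiment-setup row above can be rendered as a minimal PyTorch sketch. This is not the authors' released code (see the repository linked above): the input/output dimensions, the make_mlp helper, and the placeholder loss are assumptions made purely for illustration, while the layer sizes, ReLU activations, Xavier uniform initialization, Adam learning rate of 1e-4, and batch size of 256 follow the quoted setup.

```python
# Minimal sketch (not the authors' implementation): an MLP with 3 hidden layers
# of 256 units, ReLU activations, Xavier-uniform initialization, trained with
# Adam (lr 1e-4) on batches of 256 simulated histories, per the quoted setup.
import torch
import torch.nn as nn

def make_mlp(in_dim: int, out_dim: int, hidden: int = 256, n_hidden: int = 3) -> nn.Sequential:
    layers, d = [], in_dim
    for _ in range(n_hidden):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    net = nn.Sequential(*layers)
    # Xavier uniform initialization for all linear layers
    for m in net:
        if isinstance(m, nn.Linear):
            nn.init.xavier_uniform_(m.weight)
            nn.init.zeros_(m.bias)
    return net

# Hypothetical dimensions: a policy network mapping an encoded history to a design.
policy = make_mlp(in_dim=32, out_dim=1)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

batch_size = 256
histories = torch.randn(batch_size, 32)   # stand-in for simulated experimental histories
designs = policy(histories)               # proposed next designs
loss = designs.pow(2).mean()              # placeholder objective, not the paper's training signal
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In the paper itself, the placeholder objective would be replaced by one of its likelihood-free lower bounds on expected information gain (e.g. InfoNCE or NWJ), optimized jointly over the design policy and a critic network.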