Design Amortization for Bayesian Optimal Experimental Design
Authors: Noble Kennamer, Steven Walton, Alexander Ihler
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform three types of experiments: amortization, model, and architecture experiments. Our amortization experiment shows the dramatic increase in efficiency from amortization and the better EIG estimation provided by our more complex variational forms compared to those used in Foster et al. (2019). Model experiments examine how the benchmark method, NMC, breaks down as model complexity grows while our methods remain reliable for accurately estimating the EIG. Architecture experiments measure the impact of key components in our variational approximation and serve as a guide to using our method effectively. (A hedged NMC sketch appears after the table.) |
| Researcher Affiliation | Academia | Noble Kennamer 1, Steven Walton 2, Alexander Ihler 1 1Department of Computer Science, University of California Irvine 2Department of Computer Science, University of Oregon nkenname@uci.edu |
| Pseudocode | No | The paper describes computational procedures (e.g., NMC, posterior estimator) but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We provide our code here: https://github.com/NobleKennamer/amortized_boed |
| Open Datasets | No | The paper generates its own data for experiments based on Generalized Linear Models (GLMs) and does not use or provide access information for a publicly available dataset. For instance, 'During training, new designs are generated randomly from a multivariate normal distribution with identity covariance in dimension N_p + 1.' |
| Dataset Splits | No | The paper describes generating random designs for training and evaluation but does not specify explicit training/validation/test dataset splits with percentages or counts, or reference predefined splits. |
| Hardware Specification | Yes | All training was done on a single Nvidia 2080TI and evaluation was done on an Intel I7-9800X with 64 GB of RAM. |
| Software Dependencies | No | The paper mentions software like 'Pyro' and 'NFlows' but does not specify their version numbers: 'Our implementations made significant use of Pyro (Bingham et al. 2019) to implement the inference procedures and NFlows (Durkan et al. 2020) to construct our conditional normalizing flows.' (A version-reporting snippet appears after the table.) |
| Experiment Setup | Yes | For the neural network architecture we use attention layers for the set encoder, a residual network for the set emitter (He et al. 2016), a full-rank Gaussian distribution for the conditional base of the normalizing flow, and four affine coupling layers, each parameterized with a residual network. ... For final evaluation we generate 50 new random designs and estimate the posterior bound with N = 5000 samples, while the VNMC bounds are estimated with N = 1000 samples and M = 31 nested samples, and the NMC bounds are estimated with N = 30000 samples and M = 173 nested samples. (A flow-construction sketch appears after the table.) |
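
To make the benchmark method concrete, below is a minimal sketch of the nested Monte Carlo (NMC) EIG estimator that the paper uses as its baseline, applied to a toy Bayesian linear model. The model, prior, dimensions, and the `nmc_eig` helper are illustrative assumptions, not the paper's exact GLM setup; only the estimator's structure (N outer prior samples, M fresh inner prior samples for the marginal) follows the standard NMC form.

```python
import math
import torch

def nmc_eig(design, n_outer=1000, m_inner=31, noise_sd=1.0):
    """Nested Monte Carlo EIG estimate for a toy Bayesian linear model.

    Assumed model (illustrative, not the paper's exact GLM):
        theta ~ N(0, I_p),  y | theta, d ~ N(d @ theta, noise_sd^2 I)
    EIG(d) ~= 1/N sum_n [ log p(y_n | theta_n, d)
                          - log (1/M) sum_m p(y_n | theta_nm, d) ]
    """
    n_obs, p = design.shape
    noise = torch.distributions.Normal(0.0, noise_sd)

    # Outer samples: draw theta from the prior, then simulate outcomes.
    theta = torch.randn(n_outer, p)
    mean = theta @ design.T                       # (N, n_obs)
    y = mean + noise_sd * torch.randn_like(mean)

    # log p(y_n | theta_n, d): likelihood under the generating parameters.
    log_lik = noise.log_prob(y - mean).sum(-1)    # (N,)

    # Inner samples: fresh prior draws to estimate the marginal p(y_n | d).
    theta_in = torch.randn(m_inner, p)
    mean_in = theta_in @ design.T                 # (M, n_obs)
    log_lik_in = noise.log_prob(y[:, None, :] - mean_in[None]).sum(-1)  # (N, M)
    log_marg = torch.logsumexp(log_lik_in, dim=1) - math.log(m_inner)

    return (log_lik - log_marg).mean()

# Designs drawn from a standard multivariate normal, echoing the paper's
# training scheme (the dimensions here are arbitrary).
d = torch.randn(10, 3)
print(nmc_eig(d))
```

The inner expectation makes the estimator nested: its cost grows as N x M, which is why NMC becomes expensive (and, per the paper's model experiments, unreliable) as model complexity grows.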
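
Since the review flags missing dependency versions, one lightweight remedy when attempting a reproduction is to record the installed versions at run time. The sketch below uses `importlib.metadata` from the standard library; `pyro-ppl` and `nflows` are the usual PyPI distribution names for these packages, assumed here rather than confirmed by the paper.

```python
from importlib.metadata import version, PackageNotFoundError

# Record the versions of the paper's key dependencies so a rerun
# can be matched to the environment that produced the results.
for dist in ("pyro-ppl", "nflows", "torch"):
    try:
        print(f"{dist}=={version(dist)}")
    except PackageNotFoundError:
        print(f"{dist}: not installed")
```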
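
The architecture described in the Experiment Setup row can be made concrete with NFlows primitives. The sketch below wires four affine coupling layers, each parameterized by a `ResidualNet`, onto a conditional Gaussian base; it is a guess at the general shape described in the paper, not its actual code. In particular, NFlows ships a conditional diagonal Gaussian (`ConditionalDiagonalNormal`) rather than the full-rank conditional base the paper uses, and the attention-based set encoder producing the context vector is omitted (a fixed `context_dim` is assumed).

```python
import torch
from nflows.flows.base import Flow
from nflows.distributions.normal import ConditionalDiagonalNormal
from nflows.transforms.base import CompositeTransform
from nflows.transforms.coupling import AffineCouplingTransform
from nflows.nn.nets import ResidualNet

latent_dim, context_dim = 4, 16  # illustrative sizes, not taken from the paper

def make_net(in_features, out_features):
    # Residual network parameterizing a coupling layer, conditioned on the
    # context vector (in the paper, produced by the set encoder/emitter).
    return ResidualNet(in_features, out_features, hidden_features=64,
                       context_features=context_dim, num_blocks=2)

# Four affine coupling layers with alternating binary masks.
transforms = []
mask = torch.arange(latent_dim) % 2
for _ in range(4):
    transforms.append(
        AffineCouplingTransform(mask=mask, transform_net_create_fn=make_net))
    mask = 1 - mask

# Conditional Gaussian base. NFlows provides a diagonal version out of the
# box; the paper describes a full-rank conditional Gaussian instead.
base = ConditionalDiagonalNormal(
    shape=[latent_dim],
    context_encoder=torch.nn.Linear(context_dim, 2 * latent_dim))

flow = Flow(CompositeTransform(transforms), base)

# Conditional log-density q(theta | context), as used in a posterior bound.
theta = torch.randn(8, latent_dim)
context = torch.randn(8, context_dim)
print(flow.log_prob(theta, context=context).shape)  # torch.Size([8])
```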