Efficient Amortised Bayesian Inference for Hierarchical and Nonlinear Dynamical Systems

Authors: Geoffrey Roeder, Paul Grant, Andrew Phillips, Neil Dalchau, Edward Meeds

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We empirically validate our method by predicting the dynamic behaviour of bacteria that were genetically engineered to function as biosensors." (Section 5, Experiments: "We performed two experiments on data from the synthetic biology case study, where measurements of six genetic devices (Pcat-Pcat, RS100-S32, RS100-S34, R33-S32, R33-S175 and R33-S34) were combined into a collection of 312 time-series.")
Researcher Affiliation | Collaboration | (1) Microsoft Research, Cambridge, United Kingdom; (2) Princeton University, Princeton, United States of America.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology, nor an explicit code-release statement or repository link.
Open Datasets | No | The paper describes a custom dataset of 312 time-series from bacterial cell cultures but does not provide concrete access information (link, DOI, repository name, or formal citation) indicating that the dataset is publicly available.
Dataset Splits | Yes | "To perform 4-fold cross-validation, black-box models take approximately 40 minutes and white-box models approximately 2 hrs." (A splitting sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) for running its experiments.
Software Dependencies | No | The paper mentions software components such as 'tensorflow' and 'Adam optimisation' but does not provide version numbers for these or any other ancillary dependencies, which reproducibility requires.
Experiment Setup | Yes | "For q(z_i|Y, g), we use the same encoder NN for both white-box and black-box models: 10 1D convolutional filters, feeding a 50-unit hidden layer with tanh activations; g is concatenated to the hiddens, which are then connected to the mean and variance outputs of q. Neural networks all have 25 hidden units. During training we used a K=100 importance-weighted auto-encoder (IWAE) estimator for gradient computation (Burda et al., 2015). We ran all experiments for 500 epochs using Adam optimisation (Kingma & Ba, 2014). We first ran 4-fold cross-validation with 500 epochs and batch size 36..." (An encoder sketch follows the table.)
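As a companion to the Dataset Splits row, here is a minimal sketch of partitioning the 312 time-series for 4-fold cross-validation. The array shapes, the shuffling, and the use of scikit-learn's KFold are illustrative assumptions; the paper names 4-fold cross-validation but does not describe its splitting procedure.

```python
import numpy as np
from sklearn.model_selection import KFold

# 312 time-series measured across the six genetic devices; the per-series
# shape below is a placeholder, not a detail from the paper.
n_series = 312
Y = np.random.randn(n_series, 100, 1)  # (series, timesteps, channels), hypothetical

kf = KFold(n_splits=4, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(Y)):
    Y_train, Y_val = Y[train_idx], Y[val_idx]
    # ... train for 500 epochs with batch size 36 (per the quoted setup),
    # then score on the held-out fold.
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} validation")
```

Each fold holds out 78 of the 312 series for validation.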
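For the Experiment Setup row, a minimal sketch of the quoted encoder in Keras: 10 1D convolutional filters feeding a 50-unit tanh hidden layer, with the conditioning vector g concatenated to the hiddens before the mean and variance heads of q. Only the filter count, hidden width, activation, and concatenation point come from the paper; the kernel size, input shape, latent dimension, and the Keras functional API are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Assumed shapes: the quote fixes only the filter count (10), the hidden
# width (50, tanh), and that g is concatenated to the hiddens.
T, C = 100, 1        # hypothetical time-series length and channel count
G_DIM = 6            # hypothetical dimension of the conditioning vector g
Z_DIM = 8            # hypothetical latent dimension

y = layers.Input(shape=(T, C), name="Y")         # observed time-series
g = layers.Input(shape=(G_DIM,), name="g")       # conditioning input

h = layers.Conv1D(filters=10, kernel_size=5)(y)  # kernel size assumed
h = layers.Flatten()(h)
h = layers.Dense(50, activation="tanh")(h)       # 50-unit tanh hidden layer
h = layers.Concatenate()([h, g])                 # g concatenated to hiddens

z_mean = layers.Dense(Z_DIM, name="q_mean")(h)
z_logvar = layers.Dense(Z_DIM, name="q_logvar")(h)  # variance via log for stability

encoder = tf.keras.Model(inputs=[y, g], outputs=[z_mean, z_logvar])
encoder.summary()
```

During training, the paper's K=100 IWAE estimator (Burda et al., 2015) would draw 100 samples of z from this q per series and backpropagate through log((1/K) sum_k w_k) of the importance weights rather than the single-sample ELBO.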