Variational methods for simulation-based inference
Authors: Manuel Glöckler, Michael Deistler, Jakob H. Macke
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the accuracy and the computational efficiency of SNVI on several examples. First, we apply SNVI to an illustrative example to demonstrate its ability to capture complex posteriors without mode-collapse. Second, we compare SNVI to alternative methods on several benchmark tasks. Third, we demonstrate that SNVI can obtain the posterior distribution in models with many parameters by applying it to a neuroscience model of the pyloric network in the crab Cancer borealis. |
| Researcher Affiliation | Academia | Manuel Glöckler, University of Tübingen; Michael Deistler, University of Tübingen; Jakob H. Macke, University of Tübingen |
| Pseudocode | Yes | Algorithm 1: SNVI |
| Open Source Code | Yes | The results shown in this paper can be reproduced with the git repository https://github.com/mackelab/snvi_repo. The algorithms developed in this work are also available in the sbi toolbox (Tejero-Cantero et al., 2020). (See the usage sketch below the table.) |
| Open Datasets | Yes | We use the two moons simulator (Greenberg et al., 2019) to illustrate the ability of SNVI to capture complex posterior distributions. [...] We compare the accuracy and computational cost of SNVI to that of previous methods, using SBI benchmark tasks (Lueckmann et al., 2021). [...] The experimental data is taken from file 845 082 0044 in a publicly available dataset (Haddad & Marder, 2021). |
| Dataset Splits | No | The paper refers to the SBI benchmark tasks, which may define their own dataset splits, but it does not explicitly state training, validation, or test splits (e.g., percentages or sample counts) in its own text. |
| Hardware Specification | Yes | All simulations and runs were performed on a high-performance computer. For each run, we used 16 CPU cores (Intel family 6, model 61) and 8GB RAM. |
| Software Dependencies | No | The paper mentions software components such as 'hydra', 'sbi toolbox', and 'pyro', but it does not provide specific version numbers for these software dependencies in the text. |
| Experiment Setup | Yes | We use a conditional autoregressive normalizing flow for ℓ(x|θ) (Papamakarios et al., 2017; Kingma et al., 2016; Durkan et al., 2019a). For SNRE we use a two-block residual network with 50 hidden units. [...] We always use a standard Gaussian base distribution and five autoregressive layers with a hidden size depending on input dimension ([dim · 10, dim · 10] for spline autoregressive nets and [dim · 5 + 5] for affine autoregressive nets, each with ReLU activations). We used a total sampling budget of N = 256 for any VI loss. To estimate the IW-ELBO we use N = 32 to estimate L_IW^(K=8)(φ). [...] We train the posterior model for each round for at least 100 iterations and at most 1000 iterations. (See the IW-ELBO sketch below the table.) |
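For readers who want to try the released tooling, the sketch below shows how an SNVI-style run might look with the sbi toolbox (version ≥ 0.18, which added variational posteriors). This is a minimal sketch under assumptions, not the authors' exact pipeline: `simulator`, `prior`, `x_o`, and the budget of 1000 simulations are illustrative placeholders.

```python
import torch
from sbi.inference import SNLE
from sbi.utils import BoxUniform

# Illustrative toy problem (placeholders, not the paper's benchmark tasks).
prior = BoxUniform(low=-torch.ones(2), high=torch.ones(2))

def simulator(theta):
    # Hypothetical simulator: parameters plus Gaussian observation noise.
    return theta + 0.1 * torch.randn_like(theta)

theta = prior.sample((1000,))
x = simulator(theta)
x_o = torch.zeros(2)  # hypothetical observation

# Learn a likelihood model l(x|theta), then fit a variational posterior (SNVI-style).
inference = SNLE(prior=prior)
inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(sample_with="vi", vi_method="fKL").set_default_x(x_o)
posterior.train()  # optimizes the variational distribution against the learned potential
samples = posterior.sample((1000,))
```

The `vi_method="fKL"` choice mirrors one of the divergences studied in the paper; the toolbox also exposes alternatives, so treat the flag values here as an assumption to check against the sbi documentation.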
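To make the sampling budget in the experiment setup concrete (N = 256 total per VI loss, i.e. 32 Monte-Carlo groups of K = 8 importance samples for the IW-ELBO), here is a minimal PyTorch sketch of the estimator L_IW^(K). The names `log_joint` and `q` are assumed placeholders: an unnormalized log-density log p(x_o, θ) and a reparameterizable variational distribution.

```python
import math
import torch

def iw_elbo(log_joint, q, K=8, n_groups=32):
    """Monte-Carlo estimate of the importance-weighted ELBO L_IW^(K).

    log_joint: theta -> log p(x_o, theta), the unnormalized target (placeholder)
    q:         variational distribution with .rsample / .log_prob (e.g. a flow)
    Total sample budget is n_groups * K (32 * 8 = 256, matching the paper).
    """
    theta = q.rsample((n_groups, K))              # shape (n_groups, K, dim)
    log_w = log_joint(theta) - q.log_prob(theta)  # shape (n_groups, K)
    # log-mean-exp over the K importance samples, averaged over the groups
    return (torch.logsumexp(log_w, dim=1) - math.log(K)).mean()

# Toy check with a Gaussian target and Gaussian q (both placeholders).
target = torch.distributions.MultivariateNormal(torch.zeros(2), torch.eye(2))
loc = torch.zeros(2, requires_grad=True)
q = torch.distributions.MultivariateNormal(loc, torch.eye(2))
loss = -iw_elbo(target.log_prob, q)
loss.backward()  # gradients flow through rsample (reparameterization)
```

In the paper, `q` would be the autoregressive normalizing flow described above rather than a Gaussian; the estimator itself is unchanged.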