Adversarial robustness of amortized Bayesian inference
Authors: Manuel Gloeckler, Michael Deistler, Jakob H. Macke
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4. Experimental results. We first evaluated the robustness of Neural Posterior Estimation (NPE) and the effect of FIM-regularization on six benchmark tasks (details in Sec. A1.2). |
| Researcher Affiliation | Academia | 1Machine Learning in Science, University of Tübingen and Tübingen AI Center, Tübingen, Germany 2Max Planck Institute for Intelligent Systems, Department Empirical Inference, Tübingen, Germany. |
| Pseudocode | Yes | Algorithm 1 FIM-regularized NPE |
| Open Source Code | Yes | Code to reproduce results is available at https://github.com/mackelab/RABI. |
| Open Datasets | Yes | VAE: The decoder gψ(x) of a Variational Autoencoder (VAE) was used as a generative model for handwritten digits (Kingma & Welling, 2014). |
| Dataset Splits | Yes | To prevent overfitting, we used early stopping based on a validation loss evaluated on 512 hold-out samples. |
| Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., specific GPU or CPU models, memory details). |
| Software Dependencies | No | The paper mentions "PyTorch" and "hydra" but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | We trained each model with the Adam optimizer with a learning rate of 10^-3, a batch size of 512, and a maximum of 300 epochs. |
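The last two rows describe the paper's training protocol: Adam with a learning rate of 10^-3, a batch size of 512, at most 300 epochs, and early stopping on a 512-sample hold-out validation loss. A minimal sketch of that early-stopping loop is below; the patience window and the toy loss curve are illustrative assumptions, not values reported in the paper.

```python
# Sketch of validation-based early stopping as described in the table:
# train for at most 300 epochs, stop once the hold-out loss stops improving.
MAX_EPOCHS = 300
PATIENCE = 20  # assumed; the paper does not state a patience window


def train_with_early_stopping(validation_loss, max_epochs=MAX_EPOCHS, patience=PATIENCE):
    """Run up to `max_epochs` epochs; stop after `patience` epochs without
    improvement. `validation_loss(epoch)` stands in for one epoch of
    training plus evaluation on the 512 hold-out samples."""
    best_loss = float("inf")
    best_epoch = 0
    for epoch in range(max_epochs):
        loss = validation_loss(epoch)
        if loss < best_loss:
            best_loss, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break  # no improvement for `patience` epochs: early stop
    return best_epoch, best_loss


# Toy loss curve: improves until epoch 50, then plateaus.
toy_curve = lambda e: max(1.0 - 0.02 * e, 0.0) + 0.001
best_epoch, best_loss = train_with_early_stopping(toy_curve)
```

In a real run, `validation_loss` would wrap one Adam optimization epoch (lr 1e-3, batch size 512) followed by evaluation on the hold-out set.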