PASOA- PArticle baSed Bayesian Optimal Adaptive design
Authors: Jacopo Iollo, Christophe Heinkelé, Pierre Alliez, Florence Forbes
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments confirm the potential of the approach, which outperforms other recent existing procedures. |
| Researcher Affiliation | Academia | (1) Université Grenoble Alpes, Inria, CNRS, GINP, France; (2) Université Côte d'Azur, Inria, France; (3) Cerema, Endsum-Strasbourg, France. |
| Pseudocode | Yes | Algorithm 1: SG with minibatches $(N_t)_{t=1:T}$ at step $k$ to optimize $I_k^{PCE}$ in (9); Algorithm 2: Adaptive tempered SMC at step $k$; Algorithm 3: Particle EIG contrastive bound stochastic optimization at step $k+1$ (see the PCE-bound sketch after the table). |
| Open Source Code | Yes | Our code is implemented in Jax (Bradbury et al., 2020) and available at github.com/iolloj/pasoa. |
| Open Datasets | Yes | To benchmark our method in terms of information gained, we use the sequential prior contrastive estimation (SPCE) and sequential nested Monte Carlo (SNMC) bounds introduced in (Foster et al., 2021) and used in (Blau et al., 2022). ... For the 2D location finding experiment used in (Foster et al., 2021; Blau et al., 2022) ... In this other model (Blau et al., 2022; Foster et al., 2020), an agent compares two baskets of goods... |
| Dataset Splits | No | The paper describes a sequential design process with simulated observations and does not mention traditional train/validation/test dataset splits for model training. The experiments involve optimizing designs iteratively, not splitting a static dataset into these partitions. |
| Hardware Specification | Yes | Our method can be run on a local machine and was tested on an Apple M1 Pro chip with 16 GB of memory. However, for a faster running time, each experiment was finally produced by running our method on a single Nvidia V100 GPU. |
| Software Dependencies | No | The paper mentions software such as "Jax", "Optax", "BlackJAX", and "OTT", but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | The number of gradient steps was set to 5000 and the ESS target for the SMC procedure to 0.9. The Adam algorithm (Kingma & Ba, 2015) is then used with standard hyperparameters to perform the stochastic gradient steps (see the ESS sketch after the table). |
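
The snippet below is a minimal sketch, in JAX and Optax, of one stochastic-gradient ascent step on a PCE-style contrastive EIG bound, in the spirit of Algorithms 1 and 3. It is not the authors' implementation: `log_lik`, `thetas`, `y`, and the learning rate are assumptions, where `log_lik(theta, y, xi)` stands for the model log-likelihood, `thetas` stacks the L+1 contrastive parameter samples (in PASOA, SMC particles), and `y` is an observation simulated at the current design.

```python
# Minimal sketch (not the paper's code): one stochastic-gradient ascent step on a
# PCE-style contrastive EIG bound with respect to the design xi.
# Assumed inputs: log_lik(theta, y, xi) -> scalar log-likelihood,
# thetas of shape (L+1, d) with row 0 the primary sample and rows 1..L the
# contrastive samples, and y an observation simulated at the current design.
# For unbiased design gradients, y would typically be reparameterized as a
# differentiable function of xi and noise; here it is treated as fixed for simplicity.
import jax
import jax.numpy as jnp
import optax

def pce_bound(xi, y, thetas, log_lik):
    """Monte Carlo estimate of the prior contrastive estimation (PCE) bound."""
    log_liks = jax.vmap(lambda theta: log_lik(theta, y, xi))(thetas)  # shape (L+1,)
    # log p(y | theta_0, xi) - log( (1/(L+1)) * sum_l p(y | theta_l, xi) )
    return log_liks[0] - (jax.nn.logsumexp(log_liks) - jnp.log(log_liks.shape[0]))

def sg_step(xi, opt_state, y, thetas, log_lik, optimizer):
    """One optimizer step maximizing the PCE bound (by minimizing its negative)."""
    grads = jax.grad(lambda x: -pce_bound(x, y, thetas, log_lik))(xi)
    updates, opt_state = optimizer.update(grads, opt_state)
    return optax.apply_updates(xi, updates), opt_state

# Matching the reported setup: Adam with standard hyperparameters, 5000 steps.
# optimizer = optax.adam(learning_rate=1e-3)  # learning rate is an assumption
# opt_state = optimizer.init(xi)
# for _ in range(5000):
#     xi, opt_state = sg_step(xi, opt_state, y, thetas, log_lik, optimizer)
```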
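
The second sketch illustrates the ESS criterion that typically drives adaptive (tempered) SMC resampling, matching the 0.9 value quoted in the experiment setup. The function names and the exact use of the threshold are assumptions for illustration, not the paper's code.

```python
# Illustrative assumption (not the paper's implementation): resampling, or the
# choice of the next tempering level, is triggered when the relative effective
# sample size of the particle weights drops below the threshold.
import jax.numpy as jnp

def relative_ess(log_weights):
    """Effective sample size of the normalized weights, divided by the number of particles."""
    w = jnp.exp(log_weights - jnp.max(log_weights))
    w = w / jnp.sum(w)
    return 1.0 / (jnp.sum(w ** 2) * w.shape[0])

def needs_resampling(log_weights, threshold=0.9):
    """True when ESS/N falls below the reported 0.9 threshold."""
    return relative_ess(log_weights) < threshold
```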