Differentiable Multi-Target Causal Bayesian Experimental Design
Authors: Panagiotis Tigas, Yashas Annadani, Desi R. Ivanova, Andrew Jesson, Yarin Gal, Adam Foster, Stefan Bauer
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate that our proposed method outperforms baselines and existing acquisition strategies in both single-target and multi-target settings across a number of synthetic datasets. |
| Researcher Affiliation | Collaboration | ¹OATML, University of Oxford; ²KTH Stockholm, Sweden; ³Helmholtz AI; ⁴Department of Statistics, University of Oxford; ⁵Microsoft Research; ⁶TU Munich; ⁷CIFAR Azrieli Global Scholar. |
| Pseudocode | Yes | Algorithm 1: Differentiable CBED (DiffCBED) |
| Open Source Code | Yes | Code available at: https://github.com/yannadani/DiffCBED. |
| Open Datasets | Yes | We evaluate our proposed design framework on a semi-synthetic setting based on the DREAM gene networks (Greenfield et al., 2010). [...] In the synthetic data experiments, we focus on the Erdős-Rényi graph model. |
| Dataset Splits | No | The paper mentions 'Number of training steps per batch' and 'Number of starting samples (observational)' in Table 3, and discusses 'evaluation' of performance, but it does not specify explicit dataset splits for training, validation, and testing (e.g., 80/10/10 split or specific sample counts for each partition). |
| Hardware Specification | No | The paper mentions 'Berzelius computing from NSC Sweden for providing computational resource for some of the experiments of the paper.' However, this does not provide specific hardware details like CPU/GPU models, memory, or precise cloud instance types. |
| Software Dependencies | No | The paper mentions JAX and the pcalg R implementation (https://github.com/cran/pcalg/blob/master/R/gies.R) but does not specify version numbers for these software components. |
| Experiment Setup | Yes | Table 3 lists the hyperparameters and optimizer settings for the different experiments, including 'L', 'Relaxation temperature', 'Optimizer (Adam)', 'Learning rate of optimizer', 'Number of starting samples (observational)', and 'Number of training steps per batch', each with specific values (see the sketch after this table). |
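
The Experiment Setup row above mentions a relaxation temperature alongside an Adam optimizer, which points to a Gumbel-Softmax-style continuous relaxation of the discrete intervention-target choice, optimized by gradient ascent. Below is a minimal, hypothetical JAX/optax sketch of such a loop. The `surrogate_objective`, all shapes, function names, and the specific hyperparameter values are illustrative assumptions, not the authors' implementation; in DiffCBED the objective would be an information-theoretic acquisition estimator rather than the toy scorer used here.

```python
# Hypothetical sketch: Gumbel-Softmax relaxation over candidate intervention
# targets, optimized with Adam. Names and values are illustrative assumptions.
import jax
import jax.numpy as jnp
import optax

num_nodes = 10          # candidate intervention targets (assumed)
temperature = 0.5       # "Relaxation temperature" (illustrative value)
learning_rate = 1e-2    # "Learning rate of optimizer" (illustrative value)
num_steps = 200         # "Number of training steps per batch" (illustrative value)

def sample_relaxed_design(logits, key, tau):
    """Gumbel-Softmax sample: a differentiable relaxation of a one-hot choice."""
    gumbel = -jnp.log(-jnp.log(jax.random.uniform(key, logits.shape) + 1e-20) + 1e-20)
    return jax.nn.softmax((logits + gumbel) / tau)

def surrogate_objective(design):
    """Stand-in for an information-theoretic acquisition (e.g., an EIG estimator).
    Any differentiable scorer of the relaxed design would slot in here; a fixed
    random linear score is used purely so the example runs end to end."""
    scores = jax.random.normal(jax.random.PRNGKey(0), (num_nodes,))
    return jnp.dot(design, scores)

def loss_fn(logits, key):
    design = sample_relaxed_design(logits, key, temperature)
    return -surrogate_objective(design)  # maximize acquisition = minimize negative

logits = jnp.zeros(num_nodes)
optimizer = optax.adam(learning_rate)   # "Optimizer (Adam)" from Table 3
opt_state = optimizer.init(logits)

key = jax.random.PRNGKey(42)
for step in range(num_steps):
    key, subkey = jax.random.split(key)
    grads = jax.grad(loss_fn)(logits, subkey)
    updates, opt_state = optimizer.update(grads, opt_state)
    logits = optax.apply_updates(logits, updates)

print("selected intervention target:", int(jnp.argmax(logits)))
```

Annealing the temperature toward zero over the course of optimization is a common refinement of this pattern, since it tightens the relaxation toward a discrete one-hot design; whether and how the paper schedules the temperature is specified in its Table 3, not assumed here.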