Adaptive Online Experimental Design for Causal Discovery
Authors: Muhammad Qasim Elahi, Lai Wei, Murat Kocaoglu, Mahsa Ghasemi
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct a series of experiments using random DAGs and the SACHS Bayesian network from the bnlearn library (Scutari, 2009) to compare our algorithm with other baselines. The results show that our algorithm outperforms the baselines, requiring fewer samples. |
| Researcher Affiliation | Academia | School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana, USA; Life Sciences Institute, University of Michigan, Ann Arbor, Michigan, USA. |
| Pseudocode | Yes | Algorithm 1: Track-and-stop Causal Discovery |
| Open Source Code | Yes | The code to reproduce our experimental results and for running the baseline algorithms and our track-and-stop discovery algorithm is available at https://github.com/CausalML-Lab/Track-and-Stop-Discovery. |
| Open Datasets | Yes | We also evaluate the performance of causal discovery algorithms using the SACHS Bayesian network from the Discrete Bayesian Networks Repository in the bnlearn library (Scutari, 2009). A minimal loading sketch follows the table. |
| Dataset Splits | No | The paper describes generating graphs and evaluating performance but does not specify explicit training, validation, or test dataset splits (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper mentions running 'simulations' and 'experiments' but does not specify any particular hardware components like GPU or CPU models, or cloud computing resources used. |
| Software Dependencies | No | The paper mentions using the 'Causal Discovery Toolbox (Kalainathan et al., 2020)' and 'bnlearn library (Scutari, 2009)' but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | No | The paper describes the data generation process and the overall algorithm but does not specify concrete hyperparameters (e.g., learning rate, batch size, epochs) or detailed training configurations for the algorithm itself. |
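
The SACHS network referenced in the Open Datasets row is available from the bnlearn Bayesian network repository. The sketch below is a minimal example of loading it and drawing observational samples, assuming the pgmpy Python package (which mirrors the bnlearn repository's example models); it is not taken from the authors' released code, which may instead use the R bnlearn package directly, and the sample size shown is illustrative only.

```python
# Minimal sketch: load the SACHS Bayesian network and draw observational samples.
# Assumes pgmpy is installed; the paper's released code (linked above) may load
# the network differently, e.g., via the R bnlearn library.
from pgmpy.utils import get_example_model
from pgmpy.sampling import BayesianModelSampling

# Fetch the discrete SACHS network from the bnlearn example-model repository.
sachs = get_example_model("sachs")
print(sachs.nodes())  # 11 protein-signaling variables

# Draw forward samples from the joint distribution (sample size is illustrative).
sampler = BayesianModelSampling(sachs)
data = sampler.forward_sample(size=1000)
print(data.head())
```

Observational samples like these are only a starting point; the paper's track-and-stop procedure additionally selects interventions adaptively, which is not reproduced in this sketch.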