Active causal structure learning with advice

Authors: Davin Choo, Themistoklis Gouleakis, Arnab Bhattacharyya

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | While our main contributions are theoretical, we also performed some experiments to empirically validate that our algorithm is practical, outperforms the advice-free baseline when the advice quality is good, and remains at most a constant factor worse when the advice is poor. ... Figure 4 shows one of the experimental plots; more detailed experimental setup and results are given in Appendix I.
Researcher Affiliation | Academia | School of Computing, National University of Singapore. Correspondence to: Davin Choo <davin@u.nus.edu>.
Pseudocode | Yes | Algorithm 1: Adaptive search algorithm with advice.
Open Source Code | Yes | Source code implementation and experimental scripts are available at https://github.com/cxjdavin/active-causal-structure-learning-with-advice.
Open Datasets | Yes | For experiments, we evaluated our advice algorithm on the synthetic graph instances of (Wienöbst et al., 2021b): ... Instances are available at https://github.com/mwien/CliquePicking/tree/master/aaai_experiments
Dataset Splits | No | The paper describes evaluating an algorithm on synthetic graph instances by sampling DAGs and computing verifying sets, but it does not specify train/validation/test dataset splits, as this is not a typical supervised learning setup.
Hardware Specification | Yes | All experiments were run on a laptop with an Apple M1 Pro chip and 16GB of memory.
Software Dependencies | No | The paper mentions that "The uniform sampling code of (Wienöbst et al., 2021b) is written in Julia". However, it does not specify version numbers for Julia or any other key software components or libraries used in their own implementation, which are required for reproducibility.
Experiment Setup | No | The paper reports experimental design details such as the number of sampled advice DAGs (m = 1000) and the graph sizes (n = {16, 32, 64}), but it does not provide hyperparameters or system-level training settings typically associated with machine learning models, since the paper focuses on algorithmic evaluation rather than model training (see the sketch below the table).
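
For readers who want to reproduce the evaluation, the sweep implied by the Experiment Setup row (m = 1000 sampled advice DAGs for each graph size n in {16, 32, 64}) might be organized as in the minimal sketch below. Every helper here (the instance loader, the advice sampler, and the cost function) is a hypothetical stand-in for illustration only; it is not the authors' implementation, which lives in the linked repository and uses the CliquePicking instances referenced above.

```python
# Minimal sketch of the experiment sweep described above (m = 1000 advice DAGs
# per graph size n in {16, 32, 64}). Every helper below is a hypothetical
# stand-in, NOT the authors' implementation from the linked repository.
import random

import networkx as nx


def load_synthetic_instance(n: int) -> nx.DiGraph:
    """Stand-in for loading a CliquePicking synthetic instance with n nodes."""
    dag = nx.DiGraph()
    dag.add_nodes_from(range(n))
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < 0.3:
                dag.add_edge(u, v)  # orient low -> high so the graph stays acyclic
    return dag


def sample_advice_dag(truth: nx.DiGraph, keep_prob: float = 0.9) -> nx.DiGraph:
    """Stand-in advice sampler: keep each true edge independently with keep_prob."""
    advice = nx.DiGraph()
    advice.add_nodes_from(truth.nodes())
    advice.add_edges_from(e for e in truth.edges() if random.random() < keep_prob)
    return advice


def toy_intervention_cost(truth: nx.DiGraph, advice: nx.DiGraph) -> int:
    """Toy cost proxy (NOT the paper's verifying-set computation): count the
    vertices touched by edges on which the advice disagrees with the truth."""
    missing = set(truth.edges()) - set(advice.edges())
    return len({v for edge in missing for v in edge})


if __name__ == "__main__":
    m = 1000
    for n in (16, 32, 64):
        truth = load_synthetic_instance(n)
        costs = [toy_intervention_cost(truth, sample_advice_dag(truth)) for _ in range(m)]
        print(f"n={n}: mean toy cost over {m} advice DAGs = {sum(costs) / m:.2f}")
```

In a faithful reproduction, load_synthetic_instance would read the published aaai_experiments instances, sample_advice_dag would be replaced by the uniform sampling code of (Wienöbst et al., 2021b), and toy_intervention_cost would be replaced by the intervention counts of Algorithm 1 and of the advice-free baseline from the authors' repository.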