Ancestral Instrument Method for Causal Inference without Complete Knowledge
Authors: Debo Cheng, Jiuyong Li, Lin Liu, Jiji Zhang, Thuc Duy Le, Jixue Liu
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on synthetic and real-world datasets demonstrate the performance of the algorithm in comparison with existing IV methods. |
| Researcher Affiliation | Academia | STEM, University of South Australia, Adelaide, SA, Australia; Department of Religion and Philosophy, Hong Kong Baptist University, Hong Kong, China |
| Pseudocode | Yes | Ancestral IV estimator in PAG (AIViP) as shown in Algorithm 1 |
| Open Source Code | No | The paper states "Appendices of the paper are available at https://arxiv.org/abs/2201.03810" which is a link to the paper itself, not source code. It mentions retrieving implementation for baseline methods but does not provide source code for its own method. |
| Open Datasets | Yes | We generate two groups of synthetic datasets... The details of the data generating process are provided in Appendix C. ... VitD [Martinussen and others, 2019], Schoolingreturns [Card, 1993] and 401(k) data [Verbeek, 2008]. |
| Dataset Splits | No | The paper describes generating synthetic datasets with various sample sizes and evaluates performance on real-world datasets, but it does not explicitly provide details on train/validation/test splits (e.g., percentages or counts) or cross-validation setups for reproducibility of data partitioning. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions R packages used (e.g., stats, ivtools, pcalg) and functions within them, but does not specify their version numbers, which is necessary for reproducible software dependencies. |
| Experiment Setup | No | The paper states "The significance level is set to 0.05 for rFCI used by AIViP." and mentions some parameters for baseline methods, but it does not provide comprehensive experimental setup details (e.g., full hyperparameter and configuration settings) needed to reproduce its own method's experiments. |
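The paper's AIViP algorithm and its R dependencies (pcalg, ivtools) are not reproduced here, but the IV estimand the assessed experiments target can be illustrated with a minimal, self-contained sketch. The snippet below is a hypothetical example (not the authors' code): it generates synthetic data with a latent confounder, then contrasts the Wald/two-stage-least-squares IV estimate against naive OLS, showing why a valid instrument recovers the causal effect while OLS is biased.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data-generating process (not the paper's Appendix C setup):
# U is a latent confounder, Z a valid instrument, W the treatment, Y the outcome.
U = rng.normal(size=n)
Z = rng.normal(size=n)
W = 0.8 * Z + 0.6 * U + rng.normal(size=n)
Y = 2.0 * W + 0.9 * U + rng.normal(size=n)  # true causal effect of W on Y is 2.0

# Wald / 2SLS estimator with a single instrument: cov(Z, Y) / cov(Z, W)
beta_iv = np.cov(Z, Y)[0, 1] / np.cov(Z, W)[0, 1]

# Naive OLS slope of Y on W, biased upward by the confounder U
beta_ols = np.cov(W, Y)[0, 1] / np.var(W, ddof=1)

print(f"IV estimate:  {beta_iv:.3f}")   # close to the true effect 2.0
print(f"OLS estimate: {beta_ols:.3f}")  # biased away from 2.0
```

The same contrast underlies the paper's evaluation: IV-based estimators remain consistent under latent confounding, which OLS-style regression does not handle.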