Causal Discovery with Cascade Nonlinear Additive Noise Model
Authors: Ruichu Cai, Jie Qiao, Kun Zhang, Zhenjie Zhang, Zhifeng Hao
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Simulation results illustrate the power of the proposed method in identifying indirect causal relations across various settings, and experimental results on real data suggest that the proposed model and method greatly extend the applicability of causal discovery based on functional causal models in nonlinear cases. |
| Researcher Affiliation | Collaboration | 1. School of Computers, Guangdong University of Technology, China; 2. Department of Philosophy, Carnegie Mellon University; 3. Singapore R&D, Yitu Technology Ltd.; 4. School of Mathematics and Big Data, Foshan University, China |
| Pseudocode | Yes | Algorithm 1 Inferring causal direction with CANM |
| Open Source Code | Yes | Code for CANM is available online: https://github.com/DMIRLAB-Group/CANM |
| Open Datasets | Yes | The electricity consumption dataset [Prestwich et al., 2016] has 9504 hourly measurements from the energy industry... The stock market dataset is collected by the Tübingen cause-effect benchmark (https://webdav.tuebingen.mpg.de/cause-effect/) as pairs 66-67. |
| Dataset Splits | No | The paper mentions splitting data into training and test sets in Algorithm 1 ('Split the data into training and test sets;'), but it does not specify any validation splits or percentages for these splits. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions software components such as 'XGBoost', the 'Hilbert-Schmidt independence criterion (HSIC)', and the 'CompareCausalNetworks' package in R, but does not provide specific version numbers for any of them. |
| Experiment Setup | No | The paper describes the general design of the VAE and the two phases of the algorithm, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or other detailed training configuration settings for reproducibility. |
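The procedure the table refers to (Algorithm 1: split the data into training and test sets, fit a model for each candidate causal direction, and pick the direction with the better held-out fit) can be sketched as follows. Note this is a minimal illustration, not the paper's implementation: CANM trains a VAE and compares marginal likelihoods, whereas this sketch substitutes a simple linear-Gaussian scorer as a stand-in, and the function names `fit_gaussian_loglik` and `infer_direction` are illustrative.

```python
import numpy as np

def fit_gaussian_loglik(x_train, y_train, x_test, y_test):
    """Stand-in scorer: fit a linear-Gaussian model y = a*x + b + noise on the
    training split and return the mean held-out log-likelihood on the test
    split. (The paper instead trains a VAE and scores an ELBO.)"""
    a, b = np.polyfit(x_train, y_train, deg=1)
    resid = y_train - (a * x_train + b)
    sigma2 = max(float(resid.var()), 1e-12)
    test_resid = y_test - (a * x_test + b)
    return float(np.mean(-0.5 * np.log(2 * np.pi * sigma2)
                         - test_resid ** 2 / (2 * sigma2)))

def infer_direction(x, y, scorer=fit_gaussian_loglik, test_frac=0.3, seed=0):
    """Skeleton of Algorithm 1: split the data, score both candidate
    directions on the held-out set, and report the better-fitting one."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_test = int(len(x) * test_frac)
    test, train = idx[:n_test], idx[n_test:]
    score_xy = scorer(x[train], y[train], x[test], y[test])  # X -> Y
    score_yx = scorer(y[train], x[train], y[test], x[test])  # Y -> X
    return ("X->Y" if score_xy > score_yx else "Y->X"), score_xy, score_yx
```

The split ratio, the scorer, and the decision rule (higher held-out score wins) are the pieces that a full reproduction would need specified; as the table notes, the paper leaves the split percentages and training hyperparameters unreported.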