Causal Discovery with Latent Confounders Based on Higher-Order Cumulants
Authors: Ruichu Cai, Zhiyi Huang, Wei Chen, Zhifeng Hao, Kun Zhang
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results illustrate the asymptotic correctness and effectiveness of the proposed method. |
| Researcher Affiliation | Academia | School of Computer Science, Guangdong University of Technology, Guangzhou, China; Peng Cheng Laboratory, Shenzhen, China; College of Engineering, Shantou University, Shantou, China; Department of Philosophy, Carnegie Mellon University, Pittsburgh, PA, United States; Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates. |
| Pseudocode | Yes | Algorithm 1: Estimating the canonical lvLiNGAM model |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of the described methodology. |
| Open Datasets | No | The paper mentions "Hong Kong Stock Market Data" and "Synthetic Data" but does not provide a specific link, DOI, repository name, or formal citation (with author/year) for public access to these datasets. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | No | The paper describes how synthetic data is generated (e.g., causal coefficient sampling, noise generation) but does not provide specific experimental setup details such as hyperparameters, optimizer settings, or training configurations for its algorithm. |
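To make the "Experiment Setup" finding concrete: the paper describes synthetic data drawn from a linear non-Gaussian model with latent confounders, estimated via higher-order cumulants, but does not specify the exact generation parameters. The sketch below is an illustrative, hypothetical generator for a two-variable model with one latent confounder, plus a sample third-order cross-cumulant estimator of the kind such methods rely on; all coefficients and noise distributions here are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Zero-mean, skewed (non-Gaussian) noise: Exp(1) shifted by its mean.
# Skewed noise keeps third-order cumulants nonzero, which cumulant-based
# identification exploits. Choice of distribution is illustrative.
L = rng.exponential(1.0, n) - 1.0    # latent confounder
e1 = rng.exponential(1.0, n) - 1.0
e2 = rng.exponential(1.0, n) - 1.0

a, b, c = 0.8, 0.5, 0.7              # hypothetical causal coefficients

# Structure: L -> X1, L -> X2, X1 -> X2 (X1 and X2 confounded by L).
X1 = a * L + e1
X2 = b * X1 + c * L + e2

def cum3(x, y, z):
    """Sample third-order cross-cumulant cum(x, y, z) = E[xyz] (zero-mean data)."""
    x, y, z = x - x.mean(), y - y.mean(), z - z.mean()
    return float(np.mean(x * y * z))

print("cum3(X1, X1, X2) =", cum3(X1, X1, X2))
print("cum3(X1, X2, X2) =", cum3(X1, X2, X2))
```

By multilinearity of cumulants, these sample statistics are polynomial in the coefficients `a`, `b`, `c` and the noise skewnesses, which is the kind of relation a cumulant-based discovery method inverts; a full experiment description would need to pin down the coefficient sampling ranges and noise families, which the paper leaves unspecified.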