Some General Identification Results for Linear Latent Hierarchical Causal Structure

Authors: Zhengming Chen, Feng Xie, Jie Qiao, Zhifeng Hao, Ruichu Cai

IJCAI 2023

Reproducibility variables, results, and LLM responses:
Research Type: Experimental. "In this section, we applied the proposed algorithm to synthetic data to learn the latent hierarchical causal graph. Specifically, we considered different types of latent graphs and different sample sizes (N = 2k, 5k, 10k); the structures are shown in Fig. 3 (Measurement Model and Latent Tree) and Fig. 1 (Hierarchical Model). The experimental results are reported in Table 1. Our method gives the best results on all types of graphs, indicating that it can handle not only tree-based and measurement-based structures but also the latent hierarchical structure."
Researcher Affiliation: Academia. (1) School of Computer Science, Guangdong University of Technology, Guangzhou, China; (2) Department of Applied Statistics, Beijing Technology and Business University, Beijing, China; (3) College of Science, Shantou University, Shantou, Guangdong, China; (4) Peng Cheng Laboratory, Shenzhen 518066, China.
Pseudocode: Yes. Algorithm 1, Causal Discovery in LHM; Algorithm 2, Find Causal Clusters; Algorithm 3, Introduce Latent Variables; Algorithm 4, Update Causal Skeleton; Algorithm 5, Orient Edges.
Open Source Code: No. The paper does not contain any statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets: No. The paper states, "We applied the proposed algorithm to synthetic data," but it does not provide any specific source, link, citation, or repository information for accessing this synthetic data.
Dataset Splits: No. The paper states "different sample sizes (with N = 2k, 5k, 10k)" for synthetic data but does not specify how these samples were split into training, validation, or test sets.
Hardware Specification: No. The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies: No. The paper does not provide any specific software dependencies or version numbers (e.g., Python, PyTorch, TensorFlow, or specific solvers).
Experiment Setup: No. The paper describes how the synthetic data was generated, stating "The causal strength was generated uniformly from [−2.5, −0.5] ∪ [0.5, 2.5], and the noise term either follows a Gaussian distribution (...) or a uniform distribution U(−2, 2)." It also mentions that "Each experiment was repeated ten times with randomly generated data." However, it does not provide details on the experimental setup related to the algorithm itself, such as hyperparameters, model initialization, or specific training configurations.
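The data-generation scheme quoted above can be sketched as follows. This is a minimal illustration, not the paper's exact setup: the four-variable latent structure, the Gaussian noise scale, and all function names are assumptions; only the causal-strength interval [−2.5, −0.5] ∪ [0.5, 2.5] and the U(−2, 2) noise option come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_strength():
    # Causal strength drawn uniformly from [-2.5, -0.5] U [0.5, 2.5],
    # as quoted from the paper: sample a magnitude, then a random sign.
    mag = rng.uniform(0.5, 2.5)
    return mag if rng.random() < 0.5 else -mag

def sample_noise(n, gaussian=True):
    # Noise is either Gaussian or uniform U(-2, 2); the unit Gaussian
    # variance here is an illustrative assumption.
    return rng.normal(0.0, 1.0, n) if gaussian else rng.uniform(-2.0, 2.0, n)

def generate_data(n=2000, gaussian=True):
    # A toy linear latent hierarchical structure (assumed for illustration):
    # latent L1 -> latent L2, and each latent has two observed children.
    L1 = sample_noise(n, gaussian)
    L2 = sample_strength() * L1 + sample_noise(n, gaussian)
    X = np.column_stack([
        sample_strength() * L1 + sample_noise(n, gaussian),
        sample_strength() * L1 + sample_noise(n, gaussian),
        sample_strength() * L2 + sample_noise(n, gaussian),
        sample_strength() * L2 + sample_noise(n, gaussian),
    ])
    return X  # only the observed variables are returned

X = generate_data(n=5000, gaussian=False)
print(X.shape)  # (5000, 4)
```

A causal discovery method would receive only X; the latents L1 and L2 stay hidden, which is what makes recovering the hierarchical structure nontrivial.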