On the Parameter Identifiability of Partially Observed Linear Causal Models
Authors: Xinshuai Dong, Ignavier Ng, Biwei Huang, Yuewen Sun, Songyao Jin, Roberto Legaspi, Peter Spirtes, Kun Zhang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical studies on both synthetic and real-world datasets validate our identifiability theory and the effectiveness of the proposed method in the finite-sample regime. |
| Researcher Affiliation | Collaboration | Xinshuai Dong1*, Ignavier Ng1*, Biwei Huang2, Yuewen Sun3, Songyao Jin3, Roberto Legaspi4, Peter Spirtes1, Kun Zhang1,3 (1Carnegie Mellon University; 2University of California San Diego; 3Mohamed bin Zayed University of Artificial Intelligence; 4KDDI Research) |
| Pseudocode | No | The paper describes methods and objectives for parameter estimation but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: https://github.com/dongxinshuai/scm-identify. |
| Open Datasets | Yes | In this section, we employ a famous psychometric dataset, the Big Five dataset (https://openpsychometrics.org/), to validate our method. |
| Dataset Splits | No | The paper mentions using different sample sizes (2k, 5k, 10k) for synthetic data and a dataset of 20,000 data points for real-world data, but it does not explicitly provide training/validation/test dataset splits needed to reproduce the experiments. |
| Hardware Specification | Yes | We conduct all the experiments with a single Intel(R) Xeon(R) CPU E5-2470. |
| Software Dependencies | No | The paper states 'Our code is based on Python 3.7 and PyTorch [37]', which provides a version for Python but not for PyTorch, and does not list multiple key software components with their specific versions. |
| Experiment Setup | Yes | Data is standardized, and the optimizations in Eqs. (4), (5), and (7) are solved by Adam [27] with a learning rate of 0.02. We rely on 30 random starts and choose the one with the best likelihood. |
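The setup row above describes a generic recipe: standardize the data, maximize a likelihood with Adam at learning rate 0.02, run 30 random restarts, and keep the run with the best likelihood. As a minimal sketch of that restart scheme (not the paper's actual objective, which involves its Eqs. (4), (5), and (7)), the snippet below fits a toy 1-D Gaussian by maximum likelihood with a plain-NumPy Adam ascent; the model, step count, and helper names are illustrative assumptions, while the learning rate and restart count follow the quoted setup.

```python
import numpy as np

def adam_maximize(grad_fn, x0, lr=0.02, steps=500, beta1=0.9, beta2=0.999, eps=1e-8):
    """Plain-NumPy Adam, ascending (+gradient) to maximize an objective."""
    x = x0.copy()
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)          # bias-corrected first moment
        v_hat = v / (1 - beta2 ** t)          # bias-corrected second moment
        x += lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

def fit_with_restarts(data, n_restarts=30, seed=0):
    """Fit (mu, log_sigma) of a 1-D Gaussian by maximum likelihood,
    keeping the restart that attains the best likelihood (as in the paper's setup)."""
    rng = np.random.default_rng(seed)
    n = data.size

    def loglik(params):
        mu, log_sigma = params
        sigma2 = np.exp(2 * log_sigma)
        return -0.5 * n * np.log(2 * np.pi * sigma2) \
            - np.sum((data - mu) ** 2) / (2 * sigma2)

    def grad(params):
        mu, log_sigma = params
        sigma2 = np.exp(2 * log_sigma)
        d_mu = np.sum(data - mu) / sigma2
        d_log_sigma = -n + np.sum((data - mu) ** 2) / sigma2
        return np.array([d_mu, d_log_sigma])

    best_params, best_ll = None, -np.inf
    for _ in range(n_restarts):
        x0 = rng.normal(size=2)               # random initialization per restart
        x = adam_maximize(grad, x0)
        ll = loglik(x)
        if ll > best_ll:                      # keep the run with the best likelihood
            best_params, best_ll = x, ll
    return best_params, best_ll
```

On standardized data the fitted mean should land near 0 and the fitted scale near 1; multiple restarts guard against poor initializations, which matters more for the paper's non-convex latent-variable objective than for this convex toy example.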