Learning Causal Relations from Subsampled Time Series with Two Time-Slices
Authors: Anpeng Wu, Haoxuan Li, Kun Kuang, Keli Zhang, Fei Wu
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results on both synthetic and real-world datasets demonstrate the superiority of our DHT-CIT algorithm (Section 5, Numerical Experiments). |
| Researcher Affiliation | Collaboration | (1) Department of Computer Science and Technology, Zhejiang University, Hangzhou, China; (2) Center for Data Science, Peking University, Beijing, China; (3) Huawei Noah's Ark Lab, Huawei, Shenzhen, China; (4) Shanghai Institute for Advanced Study, Zhejiang University, Shanghai, China; (5) Shanghai AI Laboratory, Shanghai, China. |
| Pseudocode | Yes | Algorithm 1 DHT-CIT: Descendant Hierarchical Topology with Conditional Independence Test |
| Open Source Code | Yes | The code of DHT-CIT is available at: https://github.com/anpwu/DHT-CIT. |
| Open Datasets | Yes | The PM-CMR (Wyatt et al., 2020) is a public time series dataset that is commonly used to study the impact of the particle (PM2.5, T) on the cardiovascular mortality rate (CMR, Y) in 2132 counties in the US from 1990 to 2010. PM-CMR: https://pasteur.epa.gov/uploads/10.23719/1506014/SES_PM25_CMR_data.zip |
| Dataset Splits | No | The paper mentions generating synthetic data and using a sample size of 1000 for each replication, but it does not specify any training, validation, or test dataset splits or cross-validation methods. |
| Hardware Specification | Yes | Hardware used: Ubuntu 16.04.3 LTS operating system with 2 * Intel Xeon E5-2660 v3 @ 2.60GHz CPU (40 CPU cores, 10 cores per physical CPU, 2 threads per core), 256 GB of RAM, and 4 * GeForce GTX TITAN X GPU with 12GB of VRAM. |
| Software Dependencies | Yes | Software used: Python 3.8 with cdt 0.6.0, ylearn 0.2.0, causal-learn 0.1.3, GPy 1.10.0, igraph 0.10.4, scikit-learn 1.2.2, networkx 2.8.5, pytorch 2.0.0. |
| Experiment Setup | Yes | In statistical hypothesis testing, α is typically set to 0.05 or 0.01. In this paper, we set the hyper-parameter α = 0.01 as the default. Algorithm 1 also lists input parameters: 'two significance thresholds α = 0.01 and β = 0.001 for conditional independence test and pruning process'. |
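To illustrate how a significance threshold like α = 0.01 is used in practice, the sketch below implements a generic partial-correlation conditional independence test. This is an illustrative stand-in, not the paper's method: DHT-CIT's actual test statistic may differ (e.g. a kernel-based test), and the function name `ci_test` and the synthetic common-cause example are assumptions for demonstration only.

```python
import numpy as np
from scipy import stats

def ci_test(x, y, z=None, alpha=0.01):
    """Test X independent of Y given Z via partial correlation.

    Illustrative stand-in for the conditional independence tests used in
    constraint-based causal discovery; returns True when independence is
    accepted at significance level alpha.
    """
    n = len(x)
    if z is not None and np.size(z):
        # Residualize X and Y on the conditioning set Z via least squares.
        zb = np.column_stack([np.ones(n), z])
        x = x - zb @ np.linalg.lstsq(zb, x, rcond=None)[0]
        y = y - zb @ np.linalg.lstsq(zb, y, rcond=None)[0]
        k = zb.shape[1] - 1  # number of conditioning variables
    else:
        k = 0
    r = np.corrcoef(x, y)[0, 1]
    # Fisher z-transform yields an approximately standard-normal statistic.
    z_stat = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - k - 3)
    p_value = 2 * (1 - stats.norm.cdf(abs(z_stat)))
    return p_value > alpha

rng = np.random.default_rng(0)
zc = rng.normal(size=1000)          # common cause
xc = zc + 0.1 * rng.normal(size=1000)
yc = zc + 0.1 * rng.normal(size=1000)
print(ci_test(xc, yc))              # marginally dependent (common cause)
print(ci_test(xc, yc, zc))          # test again conditioning on the cause
```

The second threshold β = 0.001 in Algorithm 1 would play the same role at a stricter level during the pruning step, rejecting an edge only when the evidence against independence is very strong.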