CUTS: Neural Causal Discovery from Irregular Time-Series Data
Authors: Yuxiao Cheng, Runzhao Yang, Tingxiong Xiao, Zongren Li, Jinli Suo, Kunlun He, Qionghai Dai
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5 EXPERIMENTS Datasets. We evaluate the performance of the proposed causal discovery approach CUTS on both numerical simulation and real-scenario inspired data. ... In terms of quantitative evaluation, we use area under the ROC curve (AUROC) as the criterion. |
| Researcher Affiliation | Academia | 1Department of Automation, Tsinghua University 2Institute for Brain and Cognitive Science, Tsinghua University (THUIBCS) 3Chinese PLA General Hospital |
| Pseudocode | Yes | A.5 PSEUDOCODE FOR CUTS We provide the pseudocode of the two boosting modules of the proposed CUTS in Algorithm 1 and 2 respectively, and the whole iterative framework in Algorithm 3. |
| Open Source Code | Yes | Our code is publicly available at https://github.com/jarrycyx/unn. |
| Open Datasets | Yes | The simulated datasets come from a linear Vector Autoregressive (VAR) model and a nonlinear Lorenz-96 model (Karimi & Paul, 2010), while the real-scenario inspired datasets are from NetSim (Smith et al., 2011), an fMRI dataset describing the connecting dynamics of 15 human brain regions. ... We use data from 10 humans in the NetSim datasets (shared at https://www.fmrib.ox.ac.uk/datasets/netsim/sims.tar.gz), which is generated with synthesized dynamics of brain region connectivity and unknown to us and the algorithm. ... DREAM-3 (Prill et al., 2010) is a gene expression and regulation dataset mentioned in many causal discovery works as quantitative benchmarks (Khanna & Tan, 2020; Tank et al., 2022). |
| Dataset Splits | No | The paper refers to datasets used for experiments but does not provide specific train/validation/test dataset splits, percentages, or explicit sample counts for reproducibility of the data partitioning. |
| Hardware Specification | Yes | The experiments are deployed on a server with Intel Core CPU and NVIDIA RTX3090 GPU. |
| Software Dependencies | No | The paper mentions using "Adam optimizer" and refers to external repositories for baseline algorithms, but it does not provide specific version numbers for key software components or libraries (e.g., Python, PyTorch, CUDA). |
| Experiment Setup | Yes | Parameter Settings. During training, the τ value for the Gumbel Softmax is initially set to a relatively high value, annealed to a low value over the first n1 + n2 epochs, and then reset for the last n3 epochs. The learning rates for the Latent data prediction stage and the Causal graph fitting stage are set to lr_data and lr_graph respectively, and gradually scheduled to 0.1 lr_data and 0.1 lr_graph over all n1 + n2 + n3 epochs. The detailed hyperparameter settings are listed in Appendix Section A.3. |
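The training schedule quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration of the described behavior only: the temperature endpoints, epoch counts, learning rates, and the geometric interpolation are all assumptions for demonstration, not the paper's actual hyperparameters (those are in the paper's Appendix A.3).

```python
# Sketch of the schedule described in the Experiment Setup row:
# Gumbel Softmax temperature tau is annealed high -> low over the first
# n1 + n2 epochs, then reset and annealed again over the last n3 epochs;
# each stage's learning rate decays to 0.1x over all n1 + n2 + n3 epochs.
# All concrete values here (tau_init, tau_final, n1, n2, n3, the initial
# learning rates) are illustrative assumptions.

def gumbel_tau(epoch, n1, n2, n3, tau_init=1.0, tau_final=0.1):
    """Anneal tau from tau_init to tau_final over the first n1 + n2
    epochs, then reset and anneal again over the last n3 epochs."""
    if epoch < n1 + n2:
        frac = epoch / max(n1 + n2 - 1, 1)
    else:
        frac = (epoch - n1 - n2) / max(n3 - 1, 1)
    # Geometric interpolation between tau_init and tau_final.
    return tau_init * (tau_final / tau_init) ** frac

def stage_lr(epoch, total_epochs, lr_init):
    """Decay a stage learning rate from lr_init to 0.1 * lr_init over
    all n1 + n2 + n3 epochs (geometric schedule, one possible choice)."""
    frac = epoch / max(total_epochs - 1, 1)
    return lr_init * 0.1 ** frac

n1, n2, n3 = 20, 40, 40          # assumed epoch counts
total = n1 + n2 + n3
for epoch in range(total):
    tau = gumbel_tau(epoch, n1, n2, n3)
    lr_data = stage_lr(epoch, total, 1e-3)   # Latent data prediction stage
    lr_graph = stage_lr(epoch, total, 1e-2)  # Causal graph fitting stage
    # ... run the two alternating CUTS stages with (tau, lr_data, lr_graph) ...
```

Note the discontinuity at epoch n1 + n2: tau jumps back to its initial value for the final n3 epochs, matching the "annealed ... and then reset" description, while the learning rates decay monotonically across all epochs.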