CUTS+: High-Dimensional Causal Discovery from Irregular Time-Series

Authors: Yuxiao Cheng, Lianglong Li, Tingxiong Xiao, Zongren Li, Jinli Suo, Kunlun He, Qionghai Dai

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Compared to previous methods on simulated, quasi-real, and real datasets, we show that CUTS+ largely improves the causal discovery performance on high-dimensional data with different types of irregular sampling." |
| Researcher Affiliation | Academia | (1) Department of Automation, Tsinghua University; (2) Institute for Brain and Cognitive Science, Tsinghua University (THUIBCS); (3) Chinese PLA General Hospital |
| Pseudocode | No | The paper provides architectural diagrams (e.g., Figure 1) and mathematical formulations, but no explicit pseudocode blocks or sections labeled "Algorithm". |
| Open Source Code | Yes | "Our code and supplementary materials are on https://github.com/jarrycyx/UNN." |
| Open Datasets | Yes | Dream-3 (Prill et al. 2010) is a gene expression and regulation dataset widely used as a causal discovery benchmark (Khanna and Tan 2020; Tank et al. 2022). |
| Dataset Splits | No | The paper states, "For a fair comparison, we search the best hyperparameters for the baseline algorithms on the validation dataset, and test performances on testing sets for 5 random seeds per experiment," but it does not specify split percentages or sample counts for the training, validation, and test sets. |
| Hardware Specification | No | The paper does not describe the hardware used for its experiments, such as GPU/CPU models or other machine specifications. |
| Software Dependencies | No | The paper implies the use of Python and various libraries through its methodology (e.g., neural networks, Gumbel-Softmax), but it does not give version numbers for any software component or library. |
| Experiment Setup | No | The main text does not report hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings; it notes that some details appear in the supplementary materials. |