Jacobian Regularizer-based Neural Granger Causality

Authors: Wanqi Zhou, Shuanghao Bai, Shujian Yu, Qibin Zhao, Badong Chen

ICML 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments show that our proposed approach achieves competitive performance with the state-of-the-art methods for learning summary Granger causality and full-time Granger causality while maintaining lower model complexity and high scalability. In this section, we demonstrate the performance of the proposed methods, i.e., JRNGC-L1 and JRNGC-F, on five widely used benchmarks: the VAR model, the Lorenz-96 model, fMRI data, the DREAM-3 dataset, and CausalTime (Cheng et al., 2024b).
Researcher Affiliation Academia 1. Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, China; 2. RIKEN AIP, Japan; 3. Vrije Universiteit Amsterdam, Netherlands.
Pseudocode No The paper describes its proposed model and regularizer using textual descriptions and mathematical equations, but it does not include any explicitly formatted pseudocode or algorithm blocks.
Open Source Code Yes Our code is available at https://github.com/ElleZWQ/JRNGC.
Open Datasets Yes VAR model. The vector autoregression (VAR) model is defined as: ... where x is a D-dimensional time series... Lorenz-96. It is a classic chaotic dynamical model, which describes the non-linear interaction between variables... fMRI data. Stephen M. Smith et al. generated rich, realistic simulated fMRI data (Smith et al., 2011). DREAM-3 dataset. It is a realistic gene expression data set from the DREAM-3 challenge (Prill et al., 2010). CausalTime. CausalTime is proposed by (Cheng et al., 2024b) to evaluate time-series causal discovery algorithms in real applications.
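The quoted excerpt elides the VAR equation, but the general form x_t = A x_{t-1} + e_t makes the benchmark easy to reproduce: the sparsity pattern of the coefficient matrix A is the ground-truth Granger-causal graph that methods like JRNGC are evaluated against. Below is a minimal, hedged sketch of such a synthetic VAR(1) generator; it is an illustration with assumed parameters (dimension, density, noise scale), not the authors' data-generation code.

```python
import numpy as np

def simulate_var1(D=10, T=500, density=0.2, coef=0.3, noise=0.1, seed=0):
    """Simulate a D-dimensional VAR(1) process x_t = A x_{t-1} + e_t.

    A nonzero entry A[i, j] means series j Granger-causes series i,
    so the sparsity pattern of A is the ground-truth causal graph.
    All parameter defaults here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    # Sparse coefficient matrix with self-connections on the diagonal.
    A = coef * np.eye(D)
    mask = rng.random((D, D)) < density
    A[mask] = coef * rng.choice([-1.0, 1.0], size=int(mask.sum()))
    # Rescale so the spectral radius stays below 1 (stable process).
    A *= 0.95 / max(1.0, np.max(np.abs(np.linalg.eigvals(A))))
    x = np.zeros((T, D))
    for t in range(1, T):
        x[t] = A @ x[t - 1] + noise * rng.standard_normal(D)
    return x, (A != 0)

x, true_graph = simulate_var1()
```

A causal-discovery method is then scored by comparing its estimated adjacency matrix against `true_graph`, typically via AUROC/AUPRC as in the paper's benchmarks.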
Dataset Splits No The paper mentions training the model but does not explicitly provide specific percentages, absolute counts, or a clear methodology for train/validation/test dataset splits within the main experimental setup description. While hyperparameter tuning tables are present, the detailed validation split strategy is not explicitly stated.
Hardware Specification No The paper does not contain any specific details regarding the GPU models, CPU models, or other hardware specifications used to conduct the experiments.
Software Dependencies No The paper mentions various models and losses (e.g., 'residual MLP neural network', 'LSTM', 'mean squared error loss'), but it does not specify version numbers for any programming languages, libraries, or software packages used for implementation.
Experiment Setup Yes F. Experimental Hyperparameters. Here, we present the tuned hyperparameters for our methods and the comparative approaches across various datasets in our experiments from Table 13 to Table 35.