Neural graphical modelling in continuous-time: consistency guarantees and algorithms

Authors: Alexis Bellot, Kim Branson, Mihaela van der Schaar

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This section makes performance comparisons on controlled experiments designed to analyze four important challenges for graphical modelling with time series data: the irregularity of observation times, the sparsity of observation times, the non-linearity of dynamics, and the differing scales of processes in a system. We benchmark NGM against a variety of algorithms, namely: three representative vector autoregression models, Neural Granger causality (Tank et al., 2018) in two instantiations, one based on feed-forward neural networks (NGC-MLP) and one based on recurrent neural networks (NGC-LSTM), and the Structural Vector Autoregression Model (SVAM; Hyvärinen et al., 2010), an extension of the LiNGAM algorithm to time series; a representative independence-based approach to structure learning with time series data, PCMCI (Runge et al., 2017), which extends the PC algorithm; and a representative two-stage collocation method we call Dynamic Causal Modelling (DCM), in which derivatives are first estimated on interpolations of the data and a penalized neural network is learned to infer G (extending the linear models of Ramsay et al., 2007; Wu et al., 2014; Brunton et al., 2016). Metric. We seek to recover the adjacency matrix of local dependencies G between the state of all variables and their variation. All experiments are repeated 100 times and we report means and standard deviations of the false discovery rate (FDR) and true positive rate (TPR) in recovery performance of G.
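The FDR and TPR quoted above are standard edge-recovery metrics over the estimated adjacency matrix G. A minimal sketch of how they could be computed from a true and an estimated binary adjacency matrix is shown below; the function name is illustrative and not taken from the released code.

```python
import numpy as np

def edge_recovery_metrics(G_true, G_est):
    """False discovery rate and true positive rate for a recovered
    binary adjacency matrix of local dependencies G."""
    G_true = np.asarray(G_true, dtype=bool)
    G_est = np.asarray(G_est, dtype=bool)

    tp = np.sum(G_est & G_true)    # correctly recovered edges
    fp = np.sum(G_est & ~G_true)   # spurious edges
    fn = np.sum(~G_est & G_true)   # missed edges

    fdr = fp / max(tp + fp, 1)     # proportion of discovered edges that are false
    tpr = tp / max(tp + fn, 1)     # proportion of true edges that are recovered
    return fdr, tpr

# Example: means and standard deviations over repeated runs, as reported in the paper
# fdrs, tprs = zip(*(edge_recovery_metrics(G, G_hat) for G_hat in estimates))
# print(np.mean(fdrs), np.std(fdrs), np.mean(tprs), np.std(tprs))
```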
Researcher Affiliation | Collaboration | Alexis Bellot, Columbia University, USA (ab5305@columbia.edu); Kim Branson, GlaxoSmithKline, USA (kim.m.branson@gsk.com); Mihaela van der Schaar, University of Cambridge, UK; The Alan Turing Institute, UK; University of California, Los Angeles, USA (mv472@cam.ac.uk)
Pseudocode | No | The paper has a section titled 'ALGORITHM: NEURAL GRAPHICAL MODELLING' but describes the steps in paragraph form rather than as a structured pseudocode or algorithm block.
Open Source Code | Yes | Code associated with this work may be found at https://github.com/alexisbellot and at https://github.com/vanderschaarlab/mlforhealthlabpub.
Open Datasets | No | The paper uses simulated data generated from mathematical models such as the Lorenz model, the Rössler model, and the yeast glycolysis model, as described by citations such as (Lorenz, 1996), (Meyer et al., 1997), and (Daniels & Nemenman, 2015). It does not provide links, DOIs, or specific repository names for public datasets.
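For illustration only, a minimal sketch of simulating trajectories from one of the cited systems, the Lorenz-96 model, is shown below; the number of variables, forcing constant, and irregular sampling grid are assumptions rather than the paper's exact configuration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz96(t, x, forcing=10.0):
    """Lorenz-96 dynamics: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

p = 10                                     # number of variables (assumed)
rng = np.random.default_rng(0)
x0 = rng.normal(size=p)                    # random initial state
t_obs = np.sort(rng.uniform(0, 10, 50))    # irregular observation times (assumed)

sol = solve_ivp(lorenz96, (0, 10), x0, t_eval=t_obs)
data = sol.y.T                             # array of shape (time points, variables)
```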
Dataset Splits | Yes | Regularizing constants are chosen from the set {0.001, 0.01, 0.05, 0.1, 0.5, 1, 2} with γ = 2, using average test errors from random train-test splits of the corresponding dataset. We chose the threshold for converting the weights to the presence / absence of edges in the graph based on F1 scores on validation data; the threshold was consistent (around 0.1) across datasets.
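A minimal sketch of the quoted threshold-selection step is shown below, assuming a candidate grid of cut-offs and an F1 score computed against a validation graph; the helper names and grid are illustrative, not taken from the released code.

```python
import numpy as np

def f1_score_edges(G_true, G_est):
    """F1 score between a true and an estimated binary adjacency matrix."""
    tp = np.sum(G_est & G_true)
    fp = np.sum(G_est & ~G_true)
    fn = np.sum(~G_est & G_true)
    return 2 * tp / max(2 * tp + fp + fn, 1)

def select_edge_threshold(weight_matrix, G_val, candidates=np.linspace(0.01, 0.5, 50)):
    """Convert continuous weights into edge presence/absence by choosing the
    cut-off that maximises F1 on validation data (around 0.1 in the paper)."""
    G_val = np.asarray(G_val, dtype=bool)
    scores = [f1_score_edges(G_val, np.abs(weight_matrix) > t) for t in candidates]
    return candidates[int(np.argmax(scores))]

# The regularizing constant is chosen analogously, from
# {0.001, 0.01, 0.05, 0.1, 0.5, 1, 2}, using average test errors over
# random train-test splits of the corresponding dataset.
```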
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for running experiments.
Software Dependencies | No | The paper mentions software such as PyTorch and specific Python packages (tigramite, lingam) but does not provide version numbers for these dependencies.
Experiment Setup | Yes | The integrand fθ was taken to be a feed-forward neural network as described, with a single hidden layer of size 10 and ELU activation functions after each layer except the output layer. In each case we used the Adam optimiser as implemented by PyTorch. Starting learning rates varied between experiments (with values between 0.001 and 0.01) before being reduced by half if metrics failed to improve for a certain number of epochs. Regularizing constants are chosen from the set {0.001, 0.01, 0.05, 0.1, 0.5, 1, 2} with γ = 2.
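A minimal PyTorch sketch consistent with the quoted setup is shown below: a single hidden layer of size 10 with an ELU activation, Adam with a starting learning rate in [0.001, 0.01], and learning-rate halving on plateau. The sparsity penalty on the first-layer weights is an assumption added for completeness; the exact role of γ = 2 in the paper's penalty is not reproduced here.

```python
import torch
import torch.nn as nn

p = 10  # number of observed variables (assumed for illustration)

# Integrand f_theta: a feed-forward network with a single hidden layer of
# size 10 and an ELU activation after every layer except the output layer.
f_theta = nn.Sequential(
    nn.Linear(p, 10),
    nn.ELU(),
    nn.Linear(10, p),
)

# Adam with a starting learning rate between 0.001 and 0.01, halved when
# the monitored metric fails to improve for a number of epochs.
optimizer = torch.optim.Adam(f_theta.parameters(), lr=0.005)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=10)

# Sparsity penalty on the first-layer input weights, with the regularizing
# constant chosen from {0.001, ..., 2}. Grouping the weights per input column
# is an assumption of this sketch, and the paper's use of gamma = 2 in the
# penalty is not reproduced. Edges of G are obtained by thresholding the
# resulting weight norms (around 0.1 in the paper).
lam = 0.05
penalty = lam * f_theta[0].weight.norm(dim=0).sum()
```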