Directed Cyclic Graph for Causal Discovery from Multivariate Functional Data

Authors: Saptarshi Roy, Raymond K. W. Wong, Yang Ni

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We illustrate the superior performance of our method over existing methods in terms of causal graph estimation through extensive simulation studies. We also demonstrate the proposed method using a brain EEG dataset.
Researcher Affiliation | Academia | Saptarshi Roy, Department of Statistics, Texas A&M University, College Station, TX 77843, roys8001@stat.tamu.edu; Raymond K. W. Wong, Department of Statistics, Texas A&M University, College Station, TX 77843, raywong@tamu.edu; Yang Ni, Department of Statistics, Texas A&M University, College Station, TX 77843, yni@tamu.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks, nor does it include clearly labeled algorithm sections or code-like formatted procedures.
Open Source Code | No | The paper states only that "Code will be made available on the project's website on GitHub."
Open Datasets | Yes | We demonstrate the proposed FENCE model on a brain EEG dataset from an alcoholism study [Zhang et al., 1995].
Dataset Splits | No | The paper describes the simulation data-generation parameters (n, p, d) and MCMC burn-in iterations but does not explicitly provide training/validation/test dataset splits or cross-validation details for the empirical evaluation.
Hardware Specification | No | The paper does not provide specific hardware details, such as exact GPU/CPU models, processor types, or memory amounts, used for running its experiments.
Software Dependencies | No | The paper mentions software packages such as the fdapace and pcalg packages in R, the py-tetrad package in Python, and the eeglab toolbox in MATLAB, but does not specify their version numbers (a version-logging sketch follows the table).
Experiment Setup | Yes | For the implementation of the proposed FENCE, we fixed the number of mixture components to be 10 and ran MCMC for 5,000 iterations (discarding the first 2,000 iterations as burn-in and retaining every 5th iteration after burn-in). A minimal sketch of this burn-in/thinning schedule follows the table.
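
Because the Software Dependencies row notes that no version numbers are reported, a reader re-running the analysis would need to record them locally. The snippet below is a minimal sketch of such logging, assuming a Python environment; the distribution name "pytetrad" is an assumption (the paper only names "py-tetrad"), and the R packages (fdapace, pcalg) and MATLAB's eeglab would need analogous logging on their side (e.g., sessionInfo() or packageVersion("fdapace") in R).

```python
# Minimal sketch (not from the paper): log locally installed versions of the
# Python dependency named in the paper. "pytetrad" is an assumed distribution
# name; the R and MATLAB tools require separate, analogous logging.
from importlib import metadata

for dist in ("pytetrad",):  # assumed distribution name for the py-tetrad package
    try:
        print(f"{dist}=={metadata.version(dist)}")
    except metadata.PackageNotFoundError:
        print(f"{dist}: not installed")
```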
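
To make the Experiment Setup row concrete, here is a minimal sketch of the reported MCMC schedule (5,000 iterations, 2,000 burn-in, every 5th post-burn-in draw retained), assuming a generic sampling loop. The transition kernel `one_mcmc_update`, the use of numpy, and the 10-dimensional state are illustrative placeholders, not the paper's FENCE sampler.

```python
# Sketch of the reported MCMC schedule: 5,000 iterations, first 2,000 discarded
# as burn-in, every 5th post-burn-in draw retained.
import numpy as np

rng = np.random.default_rng(0)

def one_mcmc_update(state):
    # Placeholder random-walk update standing in for FENCE's actual MCMC moves.
    return state + rng.normal(scale=0.1, size=state.shape)

n_iter, burn_in, thin = 5_000, 2_000, 5
state = np.zeros(10)   # e.g. one value per mixture component (10 components fixed)
kept = []

for t in range(n_iter):
    state = one_mcmc_update(state)
    # Discard the first 2,000 draws, then keep every 5th draw thereafter.
    if t >= burn_in and (t - burn_in) % thin == 0:
        kept.append(state.copy())

samples = np.stack(kept)
print(samples.shape)   # (600, 10): 600 retained posterior draws
```

Under this schedule, (5,000 - 2,000) / 5 = 600 draws remain for posterior summaries.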