Neural Structure Learning with Stochastic Differential Equations
Authors: Benjie Wang, Joel Jennings, Wenbo Gong
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we demonstrate that our approach leads to improved structure learning performance on both synthetic and real-world datasets compared to relevant baselines under regular and irregular sampling intervals. |
| Researcher Affiliation | Collaboration | Benjie Wang (University of California, Los Angeles); Joel Jennings (DeepMind); Wenbo Gong (Microsoft Research Cambridge) |
| Pseudocode | Yes | Algorithm 1 SCOTCH training |
| Open Source Code | Yes | https://github.com/microsoft/causica/tree/main/research_experiments/scotch |
| Open Datasets | Yes | First, we evaluate SCOTCH on synthetic benchmarks including the Lorenz-96 (Lorenz, 1996) and Glycolysis (Daniels & Nemenman, 2015) datasets... We also evaluate SCOTCH performance on the DREAM3 datasets (Prill et al., 2010; Marbach et al., 2009)... Netsim consists of blood oxygenation level dependent imaging data. Following the same setup as Gong et al. (2022), we use subjects 2-6 to form the dataset... |
| Dataset Splits | No | The paper evaluates performance using AUROC, F1 score, TPR, and FDR, but does not explicitly provide training/validation/test split details (e.g., percentages, sample counts, or citations to predefined splits) needed to reproduce the data partitioning. For structure learning, the graph is inferred from the given datasets. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies (e.g., libraries, frameworks, or solvers) used in the experiments. |
| Experiment Setup | No | The paper states, "Further experimental details can be found in Appendices B, C, D," indicating that specific hyperparameters and training configurations are deferred to the appendices rather than provided in the main text. |