Neuro-Symbolic Temporal Point Processes

Authors: Yang Yang, Chao Yang, Boyang Li, Yinghao Fu, Shuang Li

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our approach showcases notable efficiency and accuracy across synthetic and real datasets, surpassing state-of-the-art baselines by a wide margin in terms of efficiency." (Section 6, Experiments: 6.1 Synthetic Data Experiments; 6.2 Real Data Experiments)
Researcher Affiliation | Academia | "School of Data Science, The Chinese University of Hong Kong (Shenzhen)."
Pseudocode | No | The paper describes its algorithms in prose but does not include structured pseudocode or an algorithm block.
Open Source Code | No | The paper does not provide an explicit statement about, or link to, open-source code for its methodology.
Open Datasets | Yes | "Our research involved the study of two datasets: the Car Following dataset for assessing autonomous vehicle behavior, and the Low Urine dataset, which encompasses a wealth of medical records from ICU patients." and "The Low Urine dataset, derived from MIMIC-IV, focuses on the electronic health records of 4074 ICU patients diagnosed with sepsis, capturing the physiological changes that occur leading up to the critical juncture of septic shock." and "MIMIC-IV is a publicly available database sourced from the electronic health record of the Beth Israel Deaconess Medical Center (Johnson et al., 2023). The information available includes patient measurements, orders, diagnoses, procedures, treatments, and deidentified free-text clinical notes."
Dataset Splits | No | The paper discusses using different sample sizes (5000, 10000, 20000) for datasets and evaluates accuracy, but it does not specify explicit training, validation, or test dataset splits (e.g., percentages or exact counts for each).
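Because the paper reports only corpus sizes and no split, anyone reproducing the experiments must choose their own partition. A minimal sketch assuming a conventional 70/15/15 random split of event sequences; the fractions, seed, and function name here are our assumption, not taken from the paper:

```python
import random

def split_sequences(sequences, train_frac=0.7, val_frac=0.15, seed=0):
    """Hypothetical 70/15/15 split; the paper does not state its actual splits."""
    idx = list(range(len(sequences)))
    random.Random(seed).shuffle(idx)  # fixed seed for a reproducible partition
    n_train = int(train_frac * len(idx))
    n_val = int(val_frac * len(idx))
    train = [sequences[i] for i in idx[:n_train]]
    val = [sequences[i] for i in idx[n_train:n_train + n_val]]
    test = [sequences[i] for i in idx[n_train + n_val:]]
    return train, val, test

# e.g. a corpus of 5000 event sequences, matching the paper's smallest setting
seqs = [[(0.0, "event")] for _ in range(5000)]
train, val, test = split_sequences(seqs)
```

With 5000 sequences this yields 3500/750/750; reporting the chosen fractions and seed would make such a setup reproducible.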
Hardware Specification | Yes | "For our proposed method, all experiments were conducted on a Linux server with an Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz and 30Gi of memory, running Ubuntu 20.04.5 LTS."
Software Dependencies | Yes | "The coding environment utilized was Python 3.9.12, with PyTorch 2.0.1 serving as the primary machine-learning framework."
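The paper pins Python 3.9.12 and PyTorch 2.0.1. A minimal sketch of a version gate one could run before attempting a reproduction; the helper names and the ">= required" policy are our assumptions, not part of the paper:

```python
def parse_version(v: str) -> tuple:
    """Parse a dotted version string such as "3.9.12" into a comparable tuple.

    Local build suffixes (e.g. "2.0.1+cu118") are ignored.
    """
    return tuple(int(part) for part in v.split("+")[0].split("."))

# Versions stated in the paper's reproducibility details.
REQUIRED = {"python": "3.9.12", "torch": "2.0.1"}

def satisfies(installed: str, required: str) -> bool:
    """True if the installed version is at least the required one."""
    return parse_version(installed) >= parse_version(required)
```

In a reproduction script this could be called as, e.g., `satisfies(torch.__version__, REQUIRED["torch"])`; exact-match pinning via `pip install torch==2.0.1` is the stricter alternative.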
Experiment Setup | No | The paper describes the setup for synthetic data generation and the general experimental methodology (e.g., number of runs and repetitions), but it does not provide specific hyperparameters (such as learning rate, batch size, or number of epochs) or system-level training settings for its model.