Learnable Path in Neural Controlled Differential Equations

Authors: Sheo Yon Jhin, Minju Jo, Seungji Kook, Noseong Park

AAAI 2023

Reproducibility assessment (variable: result, followed by the LLM's supporting response):
Research Type: Experimental. "We conducted time-series classification and forecasting experiments with four datasets and twelve baselines, which are all well-recognized standard benchmark environments. Our method shows the best performance in both time-series classification and forecasting."
Researcher Affiliation: Academia. Sheo Yon Jhin*, Minju Jo*, Seungji Kook, Noseong Park (Yonsei University, Seoul, South Korea; {sheoyonj, alflsowl12, 202132139, noseong}@yonsei.ac.kr).
Pseudocode: Yes. "Algorithm 1: How to train LEAP"
Open Source Code: No. The paper does not include an explicit statement about releasing its source code or provide a link to a code repository.
Open Datasets: Yes. Character Trajectories, from the UEA time-series classification archive (Bagnall et al. 2018); Speech Commands, "The Speech Commands dataset..." (Warden 2018); PhysioNet Sepsis (Reyna et al. 2019; Reiter 2005); MuJoCo, "This dataset was generated from 10,000 simulations..." (Tassa et al. 2018).
Dataset Splits: No. Algorithm 1 mentions "Training data Dtrain" and "Validating data Dval", but the paper does not provide specific percentages, sample counts, or detailed split methodology (e.g., an "80/10/10 split" or cross-validation details) in the main text.
Hardware Specification: Yes. "... i9 CPU, and NVIDIA RTX TITAN" (from the paper's combined environment statement, which also lists CUDA 11.0 and NVIDIA Driver 417.22).
Software Dependencies: Yes. "Our software and hardware environments are as follows: UBUNTU 18.04 LTS, PYTHON 3.7.6, NUMPY 1.20.3, SCIPY 1.7, MATPLOTLIB 3.3.1, CUDA 11.0, and NVIDIA Driver 417.22 ..."
Experiment Setup: Yes. "Hyperparameters: We list all the hyperparameter settings in Appendix. We repeat the training and testing procedures with five different random seeds and report their mean and standard deviation of evaluation metrics."
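The reporting protocol quoted above (five runs with different random seeds, then mean and standard deviation of the metric) can be sketched as follows. The scores below are illustrative placeholders, not results from the paper:

```python
import statistics

# Hypothetical metric values from five runs with different random seeds
# (illustrative numbers only; the paper's actual scores are in its tables).
scores = [0.912, 0.905, 0.918, 0.909, 0.914]

mean = statistics.mean(scores)
std = statistics.stdev(scores)  # sample standard deviation across seeds

# Report in the "mean ± std" form used by the paper
print(f"{mean:.3f} \u00b1 {std:.3f}")
```

Whether the paper uses the sample or population standard deviation is not stated; the sample form shown here is the common convention for a small number of repeated runs.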