Stable Neural Stochastic Differential Equations in Analyzing Irregular Time Series Data

Authors: YongKyung Oh, Dongyoung Lim, Sungil Kim

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To assess the effectiveness of our approach, we conduct extensive experiments on four benchmark datasets for interpolation, forecasting, and classification tasks, and analyze the robustness of our methods with 30 public datasets under different missing rates. Our results demonstrate the efficacy of the proposed method in handling real-world irregular time series data.
Researcher Affiliation | Academia | Ulsan National Institute of Science and Technology, Republic of Korea. {yongkyungoh, dlim, sungil.kim}@unist.ac.kr
Pseudocode | Yes | Algorithm 1: Train procedure for classification task. (A hedged sketch of such a training loop appears after this table.)
Open Source Code | Yes | Code is available at https://github.com/yongkyung-oh/Stable-Neural-SDEs.
Open Datasets | Yes | The PhysioNet Mortality dataset contains multivariate time series data from 37 variables of Intensive Care Unit (ICU) records... (Silva et al., 2012); the MuJoCo dataset (Tassa et al., 2018); the PhysioNet Sepsis dataset (Reyna et al., 2019); the Speech Commands dataset (Warden, 2018); 30 datasets from the University of East Anglia (UEA) and the University of California Riverside (UCR) Time Series Classification Repository (Bagnall et al., 2018).
Dataset Splits | Yes | The data was divided into train, validation, and test sets in a 0.70/0.15/0.15 ratio. [...] Divide training data into a train set Dtrain and a validation set Dval. [...] ceasing the training when the validation loss didn't improve for 10 successive epochs. (See the split and early-stopping sketch after this table.)
Hardware Specification | Yes | Our experiments were performed using a server on Ubuntu 22.04 LTS, equipped with an Intel(R) Xeon(R) Gold 6242 CPU and six NVIDIA A100 40GB GPUs.
Software Dependencies | No | The paper mentions using the Python libraries torchsde, torchcde, and ray, but does not specify their version numbers.
Experiment Setup | Yes | For the proposed methodology, the training process spans 300 epochs, employing a batch size of 64 and a learning rate of 0.001. Hyperparameter optimization is conducted through a grid search over the number of layers in the vector field, nl ∈ {16, 32, 64, 128}, and the hidden vector dimension, nh ∈ {16, 32, 64, 128}. (See the grid-search sketch after this table.)
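
The paper's pseudocode (Algorithm 1, the train procedure for the classification task) is only named in the table above. Below is a minimal sketch of what such a loop could look like with the torchsde library the authors mention; the NeuralSDE module, its drift/diffusion networks, and the linear readout are illustrative assumptions, not the authors' Stable Neural SDE formulation. The batch size of 64 and learning rate of 0.001 follow the Experiment Setup row.

    # Minimal sketch of a Neural SDE training loop for classification
    # (assumes diagonal noise, an Euler solver, and a linear readout;
    # this is NOT the authors' Stable Neural SDE drift/diffusion design).
    import torch
    import torch.nn as nn
    import torchsde
    from torch.utils.data import DataLoader, TensorDataset

    class NeuralSDE(nn.Module):
        noise_type = 'diagonal'   # torchsde requires these two attributes
        sde_type = 'ito'

        def __init__(self, nh):
            super().__init__()
            self.drift = nn.Sequential(nn.Linear(nh, nh), nn.Tanh(), nn.Linear(nh, nh))
            self.diffusion = nn.Sequential(nn.Linear(nh, nh), nn.Sigmoid())

        def f(self, t, y):        # drift term f(t, y)
            return self.drift(y)

        def g(self, t, y):        # diagonal diffusion term g(t, y)
            return self.diffusion(y)

    nh, n_classes = 32, 10
    sde = NeuralSDE(nh)
    readout = nn.Linear(nh, n_classes)
    params = list(sde.parameters()) + list(readout.parameters())
    optimizer = torch.optim.Adam(params, lr=0.001)          # lr from the paper
    criterion = nn.CrossEntropyLoss()

    # Dummy data standing in for encoded irregular time series (hypothetical).
    loader = DataLoader(TensorDataset(torch.randn(256, nh),
                                      torch.randint(0, n_classes, (256,))),
                        batch_size=64)                       # batch size from the paper

    ts = torch.linspace(0.0, 1.0, 10)
    for x0, y_true in loader:
        ys = torchsde.sdeint(sde, x0, ts, method='euler', dt=0.05)  # (len(ts), batch, nh)
        logits = readout(ys[-1])                             # classify from terminal state
        loss = criterion(logits, y_true)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()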
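
The Dataset Splits row translates directly into code. The sketch below hardwires the 0.70/0.15/0.15 ratio, the 300-epoch budget, and the patience of 10 reported by the paper; the dummy dataset and the evaluate helper are placeholder assumptions.

    # Hedged sketch of the 0.70/0.15/0.15 split and patience-10 early stopping.
    import torch
    from torch.utils.data import TensorDataset, random_split

    # Dummy dataset standing in for the benchmark data (hypothetical).
    dataset = TensorDataset(torch.randn(1000, 32), torch.randint(0, 10, (1000,)))

    # 0.70 / 0.15 / 0.15 split, as described in the paper.
    n = len(dataset)
    n_train, n_val = int(0.70 * n), int(0.15 * n)
    train_set, val_set, test_set = random_split(
        dataset, [n_train, n_val, n - n_train - n_val])

    def evaluate(val_data):       # placeholder validation loss (hypothetical helper)
        return torch.rand(()).item()

    # Stop when validation loss fails to improve for 10 successive epochs.
    best_val, patience, wait = float('inf'), 10, 0
    for epoch in range(300):      # 300 epochs, as in the paper
        # ... one epoch of training on train_set goes here ...
        val_loss = evaluate(val_set)
        if val_loss < best_val:
            best_val, wait = val_loss, 0
        else:
            wait += 1
            if wait >= patience:
                break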
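
The grid search in the Experiment Setup row can be written against Ray Tune, since the paper mentions the ray library (version unspecified). This sketch uses the legacy tune.run/tune.report API, and the placeholder objective stands in for an actual training run over each (nl, nh) pair.

    # Hedged sketch of the paper's 4 x 4 grid search over nl and nh with Ray Tune
    # (legacy tune.run API; the objective below is a placeholder, not real training).
    from ray import tune

    def train_fn(config):
        nl, nh = config['nl'], config['nh']   # vector-field layers, hidden dimension
        # ... build and train the model with (nl, nh) here ...
        val_loss = float((nl * nh) % 7)       # placeholder objective (hypothetical)
        tune.report(val_loss=val_loss)

    analysis = tune.run(
        train_fn,
        config={
            'nl': tune.grid_search([16, 32, 64, 128]),
            'nh': tune.grid_search([16, 32, 64, 128]),
        },
    )
    print(analysis.get_best_config(metric='val_loss', mode='min'))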