LORD: Lower-Dimensional Embedding of Log-Signature in Neural Rough Differential Equations
Authors: Jaehoon Lee, Jinsung Jeon, Sheoyon Jhin, Jihyeon Hyeong, Jayoung Kim, Minju Jo, Seungji Kook, Noseong Park
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experiments with benchmark datasets, the improvement ratio by our method is up to 75% in terms of various classification and forecasting evaluation metrics. |
| Researcher Affiliation | Academia | Jaehoon Lee, Jinsung Jeon, Sheoyon Jhin, Jihyeon Hyeong, Jayoung Kim, Minju Jo, Seungji Kook, and Noseong Park. Yonsei University, Seoul, South Korea. {jaehoonlee,jjsjjs0902,sheoyonj,jiji.hyeong,jayoung.kim,alflsowl12,2021321393,noseong}@yonsei.ac.kr |
| Pseudocode | Yes | Algorithm 1: How to train LORD-NRDE (a training-loop sketch follows this table) |
| Open Source Code | Yes | Our code is available in https://github.com/leejaehoon2016/LORD. |
| Open Datasets | Yes | We use six real-world datasets, all of which contain very long time-series samples. There are 3 classification datasets from the University of East Anglia (UEA) repository (Tan & Webb): EigenWorms, CounterMovementJump, and SelfRegulationSCP2, and 3 forecasting datasets from Beth Israel Deaconess Medical Centre (BIDMC) in the TSR archive (Tan & Webb): BIDMCHR, BIDMCRR, and BIDMCSpO2. We refer to Appendix B for details of the datasets. |
| Dataset Splits | Yes | Input: Training data D_train, Validation data D_val, Maximum iteration numbers max_iter_AE and max_iter_TASK |
| Hardware Specification | Yes | Our software and hardware environments are as follows: UBUNTU 18.04 LTS, PYTHON 3.7.10, PYTORCH 1.8.1, CUDA 11.4, and NVIDIA Driver 470.42.01, i9 CPU, and NVIDIA RTX A6000. |
| Software Dependencies | Yes | Our software and hardware environments are as follows: UBUNTU 18.04 LTS, PYTHON 3.7.10, PYTORCH 1.8.1, CUDA 11.4, and NVIDIA Driver 470.42.01, i9 CPU, and NVIDIA RTX A6000. |
| Experiment Setup | Yes | The numbers of layers in the encoder, decoder, and main NRDE (N_g, N_f, and N_o of Eqs. 9 to 11) are in {2, 3}. The hidden sizes (h_g, h_f, and h_o of Eqs. 9 to 11) are in {32, 64, 128, 192}. The coefficients of the L2 regularizers in Eqs. 13 and 14 are in {1×10^-5, 1×10^-6}. The coefficient of the embedding regularizer, c_e in Eq. 13, is in {0, 1, 10}. The maximum iteration numbers, max_iter_AE and max_iter_TASK in Alg. 1, are in {400, 500, 1000, 1500, 2000}. The learning rate of the pre-training and main-training is 1×10^-3. (A hyperparameter-grid sketch follows this table.) |
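The search space quoted in the Experiment Setup row can be written out as a plain grid. Below is a minimal sketch in Python; the dictionary keys are illustrative names chosen here, not identifiers from the LORD repository.

```python
from itertools import product

# Hyperparameter ranges reported in the paper's experiment setup.
# Key names are illustrative; they do not come from the LORD codebase.
grid = {
    "num_layers": [2, 3],               # N_g, N_f, N_o (encoder, decoder, main NRDE)
    "hidden_size": [32, 64, 128, 192],  # h_g, h_f, h_o
    "l2_coeff": [1e-5, 1e-6],           # L2 regularizers in Eqs. 13 and 14
    "embed_coeff": [0, 1, 10],          # c_e, embedding regularizer in Eq. 13
    "max_iter": [400, 500, 1000, 1500, 2000],  # max_iter_AE / max_iter_TASK
    "lr": [1e-3],                       # learning rate for pre- and main-training
}

# Enumerate every configuration in the grid.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(f"{len(configs)} candidate configurations")
```

Written this way, the full grid contains 240 configurations, though the excerpt above does not say whether the search was exhaustive.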
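The Pseudocode row cites Algorithm 1, which, per the quoted setup, trains in two stages: a pre-training phase bounded by max_iter_AE for the autoencoder and a main-training phase bounded by max_iter_TASK for the task model. The sketch below mirrors only that two-stage structure; `autoencoder`, `nrde`, and their loss methods are hypothetical stand-ins, not the authors' API.

```python
from itertools import cycle
import torch

def train_lord_nrde(autoencoder, nrde, train_loader,
                    max_iter_AE=1000, max_iter_TASK=1000, lr=1e-3):
    """Two-stage loop in the spirit of Algorithm 1 (a sketch only).

    `autoencoder` embeds log-signatures into a lower-dimensional space;
    `nrde` is the main model consuming that embedding. Both objects and
    their methods are assumed for illustration.
    """
    # Stage 1: pre-train the encoder/decoder on a reconstruction objective.
    opt_ae = torch.optim.Adam(autoencoder.parameters(), lr=lr)
    for _, (logsig, _) in zip(range(max_iter_AE), cycle(train_loader)):
        loss = autoencoder.reconstruction_loss(logsig)  # hypothetical method
        opt_ae.zero_grad()
        loss.backward()
        opt_ae.step()

    # Stage 2: train the main NRDE on the downstream task, feeding it the
    # lower-dimensional embedding of the log-signature.
    opt_task = torch.optim.Adam(nrde.parameters(), lr=lr)
    for _, (logsig, target) in zip(range(max_iter_TASK), cycle(train_loader)):
        embedding = autoencoder.encode(logsig)    # hypothetical method
        loss = nrde.task_loss(embedding, target)  # hypothetical method
        opt_task.zero_grad()
        loss.backward()
        opt_task.step()
```

The quoted input to Algorithm 1 also includes validation data D_val, presumably for model selection; that loop is omitted from the sketch.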