Operator-Learning-Inspired Modeling of Neural Ordinary Differential Equations
Authors: Woojin Cho, Seunghyeon Cho, Hyundong Jin, Jinsung Jeon, Kookjin Lee, Sanghyun Hong, Dongeun Lee, Jonghyun Choi, Noseong Park
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "In our experiments with general downstream tasks, our method significantly outperforms existing methods." and "For empirical evaluations, we test our method on various ML downstream tasks including image classification, time series classification, and image generation." |
| Researcher Affiliation | Academia | 1Yonsei University 2Arizona State University 3Oregon State University 4Texas A&M University-Commerce |
| Pseudocode | No | The paper describes the overall workflow and proposed method using text and equations (Figure 1 and Figure 2), but it does not include a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | The paper lists software environments but does not provide an explicit statement about releasing its own source code or a direct link to a code repository. |
| Open Datasets | Yes | "Datasets: We test baselines and our model with the following four image classification benchmarks: MNIST (Le Cun, Cortes, and Burges 2010), CIFAR-10 (Krizhevsky, Hinton et al. 2009), CIFAR-100 (Krizhevsky, Hinton et al. 2009), and STL-10 (Coates, Ng, and Lee 2011)." and "Human Activity (Kaluža et al. 2010) and Physionet (Silva et al. 2010) benchmark datasets are used to train and evaluate models for time series classification." |
| Dataset Splits | Yes | The four image classification benchmarks MNIST (Le Cun, Cortes, and Burges 2010), CIFAR-10 (Krizhevsky, Hinton et al. 2009), CIFAR-100 (Krizhevsky, Hinton et al. 2009), and STL-10 (Coates, Ng, and Lee 2011) are standard datasets with well-defined training, validation, and test splits, as are the time series classification benchmarks Human Activity (Kaluža et al. 2010) and Physionet (Silva et al. 2010). |
| Hardware Specification | Yes | Our software and hardware environments are as follows: UBUNTU 18.04 LTS, PYTHON 3.6, TORCHDIFFEQ, PYTORCH 1.10.2, CUDA 11.4, i9 CPU, and NVIDIA RTX A6000. |
| Software Dependencies | Yes | Our software and hardware environments are as follows: UBUNTU 18.04 LTS, PYTHON 3.6, TORCHDIFFEQ, PYTORCH 1.10.2, CUDA 11.4, i9 CPU, and NVIDIA RTX A6000. |
| Experiment Setup | Yes | "We use a learning rate of 0.001 and a batch size of 64." and "For all NODE-based baselines, we use three convolutional layers to define their ODE functions and for our method, we use two BFNO layers (N = 2) and two kernels (L = 2)." and "Adam optimizer with a learning rate of 0.001, we train on a single GPU with a batch size of 200 for 100 epochs." |
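For readers unfamiliar with the setup the table describes: the paper defines ODE functions (convolutional layers for the NODE baselines, BFNO layers for the proposed method) and integrates them with TORCHDIFFEQ. The dependency-free sketch below only illustrates the core neural-ODE idea, that a hidden state evolves as dh/dt = f(h, t); the `ode_func` weights and the fixed-step Euler integrator are illustrative stand-ins, not the paper's learned layers or its adaptive solver.

```python
def ode_func(h, t):
    """Toy 'learned' dynamics: a fixed linear map standing in for the
    convolutional/BFNO layers that define f(h, t) in the paper."""
    w = [[-0.5, 0.1], [0.0, -0.3]]  # placeholder weights, not trained
    return [sum(w[i][j] * h[j] for j in range(2)) for i in range(2)]

def euler_integrate(f, h0, t0=0.0, t1=1.0, steps=100):
    """Fixed-step Euler solve of dh/dt = f(h, t) from t0 to t1.
    TORCHDIFFEQ would instead use an adaptive solver and backpropagate
    through the solve; Euler is used here only for readability."""
    h, t = list(h0), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        dh = f(h, t)
        h = [h[i] + dt * dh[i] for i in range(len(h))]
        t += dt
    return h

# Evolve an initial hidden state h(0) = [1, 0] to h(1).
h1 = euler_integrate(ode_func, [1.0, 0.0])
```

With these decaying placeholder dynamics, the first component shrinks from 1.0 toward exp(-0.5) ≈ 0.61 at t = 1; in the actual models, f is parameterized by the layers above and trained end to end with the Adam settings quoted in the table.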