From Fourier to Neural ODEs: Flow Matching for Modeling Complex Systems
Authors: Xin Li, Jingdong Zhang, Qunxi Zhu, Chengli Zhao, Xue Zhang, Xiaojun Duan, Wei Lin
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Consequently, our approach outperforms state-of-the-art methods in terms of training time, dynamics prediction, and robustness. Finally, we demonstrate the superior performance of our framework using a number of representative complex systems. |
| Researcher Affiliation | Academia | 1College of Science, National University of Defense Technology, Changsha, Hunan 410073, China. 2School of Mathematical Sciences, LMNS, and SCMS, Fudan University, China. 3Research Institute of Intelligent Complex Systems, Fudan University, China. 4State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Institutes of Brain Science, Fudan University, China. 5Shanghai Artificial Intelligence Laboratory, China. Correspondence to: Qunxi Zhu <qxzhu16@fudan.edu.cn>, Chengli Zhao <chenglizhao@nudt.edu.cn>. |
| Pseudocode | Yes | The entire framework is illustrated in Fig. 1, and we provide a detailed execution process in Algorithm 1 in Appendix A.2. |
| Open Source Code | No | No explicit statement or link for open-source code was found in the paper. |
| Open Datasets | Yes | To explore the potential applicability of the method presented in this paper in real-world systems, we conducted preliminary experiments on the time-series data of polar motion (the data can be accessed via https://www.iers.org/IERS/EN/DataProducts/EarthOrientationData/eop.html). Additionally, we incorporate four Effective Angular Momentum (EAM) functions as features, namely: (a) Atmospheric Angular Momentum (AAM); (b) Hydrological Angular Momentum (HAM); (c) Oceanic Angular Momentum (OAM); and (d) Sea-Level Angular Momentum (SLAM); these data can be accessed via http://rz-vm115.gfz-potsdam.de:8080/repository. |
| Dataset Splits | Yes | We generate a dataset comprising 100 training samples, 30 validation samples, and 50 testing samples, with µ = 0, c = 100, l = 0.1, and random initial values s0. We partition the data as follows: 70% for training, 10% for validation, and the remaining 20% for testing. |
| Hardware Specification | Yes | In this section, we present a thorough analysis of our framework based on experiments conducted using a computational setup with 64GB RAM and an NVIDIA Tesla V100 GPU equipped with 16GB memory. |
| Software Dependencies | No | The paper mentions using “Adam optimizer” but does not provide specific version numbers for any software libraries, packages, or other dependencies. |
| Experiment Setup | Yes | First, we construct a multi-layer fully connected neural network to model the underlying dynamics f, and use the neural network to predict the vector field of the dynamical system from a dynamical perspective. The neural network has four hyperparameters: the input dimension di, the hidden layer dimension dh, the number of hidden layers lh, and the output dimension do. The training data involves three hyperparameters: the number of training samples Ntr, the number of time-domain sampling points N, and the number of spatial-domain sampling points Nx and/or Ny. For Fourier analysis, it is necessary to consider the truncation frequency K. In Gaussian random field sampling, we consider the output scale c, mean µ, and length scale l. Therefore, unless otherwise specified, the hyperparameter settings for our experiments are shown in Table S1. During training, we set the learning rate to 0.001 and the weight decay to 1e-5, and employed the Adam optimizer. |