ControlSynth Neural ODEs: Modeling Dynamical Systems with Guaranteed Convergence

Authors: Wenjie Mei, Dongzhe Zheng, Shihua Li

NeurIPS 2024

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. "Finally, we compare several representative NNs with CSODEs on important physical dynamics under the inductive biases of CSODEs, and illustrate that CSODEs have better learning and predictive abilities in these settings."
Researcher Affiliation: Academia. "Wenjie Mei and Shihua Li are with the School of Automation and the Key Laboratory of MCCSE of the Ministry of Education, Southeast University, Nanjing, China. Dongzhe Zheng is with the Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China."
Pseudocode: No. The paper does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code: Yes. "Our code is available online at https://github.com/ContinuumCoder/ControlSynth-Neural-ODE."
Open Datasets: Yes. "We conduct 1000 simulations... We divide these data points into 50 sequences, each containing 30 time points. Our goal is to use these sequences to train an NN to predict the dynamic changes over the next 10 seconds, specifically the sequences of the next 30 time points."
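The quote above implies a simple windowing scheme: each simulated trajectory contributes an observed window of 30 time points and a target window of the following 30 time points. The sketch below shows one way such sequences could be built; the array names, shapes, and NumPy-based implementation are illustrative assumptions, not taken from the paper or its released code.

```python
import numpy as np

def make_sequences(trajectories, obs_len=30, pred_len=30):
    """Split simulated trajectories into (observed, target) windows.

    trajectories: array of shape (num_sims, num_time_points, state_dim),
    e.g. the outputs of the simulations described above. Shapes and names
    here are assumptions for illustration.
    """
    inputs, targets = [], []
    for traj in trajectories:
        # Slide a non-overlapping window: obs_len observed points followed
        # by the next pred_len points to be predicted.
        step = obs_len + pred_len
        for start in range(0, len(traj) - step + 1, step):
            inputs.append(traj[start : start + obs_len])
            targets.append(traj[start + obs_len : start + step])
    return np.stack(inputs), np.stack(targets)

# Example with synthetic data: 1000 trajectories of 60 time points in a
# 2-dimensional state space (purely illustrative numbers).
sims = np.random.randn(1000, 60, 2)
X, Y = make_sequences(sims)
print(X.shape, Y.shape)  # (1000, 30, 2) (1000, 30, 2)
```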
Dataset Splits: Yes. "The scatter plot visualizes the performance trajectory of CSODE models during training, where each point represents the training loss (x-axis) and validation loss (y-axis) at a specific epoch."
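For concreteness, a plot of that form pairs the training-set loss and validation-set loss recorded at each epoch. The snippet below is a minimal, hypothetical illustration using matplotlib; the loss values are placeholders, not results from the paper.

```python
import matplotlib.pyplot as plt

# Hypothetical per-epoch losses; in practice these come from the training
# and validation passes of the model under study.
train_losses = [1.00, 0.62, 0.41, 0.30, 0.24, 0.21]
val_losses   = [1.10, 0.70, 0.50, 0.39, 0.34, 0.33]

# Each point pairs the training loss (x-axis) with the validation loss
# (y-axis) at one epoch, matching the plot described above.
plt.scatter(train_losses, val_losses)
plt.xlabel("Training loss")
plt.ylabel("Validation loss")
plt.title("Per-epoch training vs. validation loss")
plt.show()
```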
Hardware Specification: Yes. "All experiments in this study are conducted on a system equipped with an NVIDIA GeForce RTX 3080 GPU and CUDA, ensuring a consistent computational environment across all tests."
Software Dependencies: No. The paper mentions CUDA and the SeDuMi solver but does not provide specific version numbers for these or for any other software dependencies, such as deep learning frameworks (e.g., PyTorch, TensorFlow) or the specific library versions used in the experiments.
Experiment Setup: Yes. "Each table includes configurations such as physics parameters, the MLP network structure for the function f(·), optimizer, loss function, learning rate, and training epochs, among other parameters."
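To show what such a configuration typically looks like in code, the sketch below sets up a generic neural ODE whose vector field f(·) is an MLP and trains it with an MSE loss and the Adam optimizer. It assumes PyTorch and the torchdiffeq package; the network sizes, learning rate, and epoch count are placeholder values rather than the settings reported in the paper's tables, and this is not the authors' CSODE implementation.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumed dependency; not confirmed by the paper

class VectorField(nn.Module):
    """MLP parameterizing the vector field f(.) of a plain neural ODE."""
    def __init__(self, state_dim=2, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, t, x):
        return self.net(x)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
f = VectorField().to(device)

# Placeholder hyperparameters (optimizer, loss function, learning rate, epochs).
optimizer = torch.optim.Adam(f.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
t = torch.linspace(0.0, 10.0, 30, device=device)  # 30 time points over 10 s

# Dummy batch: initial states and target trajectories (batch, time, state).
x0 = torch.randn(16, 2, device=device)
target = torch.randn(16, 30, 2, device=device)

for epoch in range(100):
    optimizer.zero_grad()
    pred = odeint(f, x0, t)                 # shape: (time, batch, state)
    loss = loss_fn(pred.permute(1, 0, 2), target)
    loss.backward()
    optimizer.step()
```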