Stochastic Neural Simulator for Generalizing Dynamical Systems across Environments

Authors: Jiaqi Liu, Jiaxu Cui, Jiayi Yang, Bo Yang

IJCAI 2024

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | "Intensive experiments are conducted on five complex dynamical systems in various fields. Results show that the proposed CoNDP can achieve optimal results compared with common neural simulators and state-of-the-art cross-environmental models."

Researcher Affiliation | Academia | "Jiaqi Liu^1,2, Jiaxu Cui^1,2, Jiayi Yang^1,2 and Bo Yang^1,2; ^1 Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, China; ^2 College of Computer Science and Technology, Jilin University, China"

Pseudocode | No | The paper describes the model (CoNDP) and its components in detail using textual descriptions and mathematical equations, but it does not include a dedicated pseudocode or algorithm block.

Open Source Code | Yes | "Our code is available at https://github.com/ljqjlu/CoNDP."

Open Datasets | No | "For the training set, we sample N^env_tr environments per system and N^tra_tr trajectories per environment. In addition, we sample N^tra_tu (up to 10) and N^tra_te (100) trajectories per new environment for tuning and testing, respectively. A large number of environments from each system are collected as testing environments. ... Additional details regarding the dynamical systems and experimental settings can be found in the supplementary material."

Dataset Splits | Yes | "For the training set, we sample N^env_tr environments per system and N^tra_tr trajectories per environment. In addition, we sample N^tra_tu (up to 10) and N^tra_te (100) trajectories per new environment for tuning and testing, respectively."

Hardware Specification | No | The paper describes the experimental setup and results but does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to conduct the experiments.

Software Dependencies | No | The paper states "Detailed implementations, such as neural network architectures, can be found in supplementary material," but it does not list any specific software dependencies or their version numbers (e.g., Python, PyTorch, TensorFlow) in the main body of the paper.

Experiment Setup | Yes | "Empirically, we observed that model performance was already satisfactory when the context size was around 30. Thus, the size of the context was set to 30 in our experiments."
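The split scheme quoted in the Open Datasets and Dataset Splits rows (N^env_tr training environments with N^tra_tr trajectories each, plus up to 10 tuning and 100 testing trajectories per unseen environment) can be sketched as follows. All counts, the `make_splits` function, and the placeholder trajectory records are hypothetical illustrations; the paper defers exact values and simulation details to its supplementary material.

```python
import random

def make_splits(n_env_tr=5, n_tra_tr=8, n_env_new=3,
                n_tra_tu=10, n_tra_te=100, seed=0):
    """Sketch of the cross-environment split: n_tra_tr trajectories for each
    of n_env_tr training environments, plus n_tra_tu tuning and n_tra_te
    testing trajectories for each unseen environment. Counts are placeholders."""
    rng = random.Random(seed)

    # A "trajectory" is just a tagged record here; a real simulator would
    # integrate the system's dynamics from a sampled initial condition.
    def sample_trajectory(env, idx):
        return {"env": env, "traj": idx, "init": rng.random()}

    train = [sample_trajectory(e, i)
             for e in range(n_env_tr) for i in range(n_tra_tr)]
    tune, test = [], []
    for e in range(n_env_tr, n_env_tr + n_env_new):  # unseen environments
        tune += [sample_trajectory(e, i) for i in range(n_tra_tu)]
        test += [sample_trajectory(e, i) for i in range(n_tra_te)]
    return train, tune, test

train, tune, test = make_splits()
print(len(train), len(tune), len(test))  # 40 30 300
```

Note that tuning and testing trajectories come only from environments never seen during training, which is what makes the evaluation a test of cross-environment generalization rather than in-distribution accuracy.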