Do We Need an Encoder-Decoder to Model Dynamical Systems on Networks?
Authors: Bing Liu, Wei Luo, Gang Li, Jing Huang, Bo Yang
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments, we verify that the proposed model can reliably recover a broad class of dynamics on different network topologies from time series data. |
| Researcher Affiliation | Academia | (1) College of Computer Science and Technology, Jilin University, China; (2) Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, China; (3) School of Information Technology, Deakin University, Geelong, Australia |
| Pseudocode | No | The paper describes the model equations and training process but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper links to the code of the baseline NDCN model (https://github.com/calvin-zcx/ndcn) but provides neither access to, nor an explicit statement about the availability of, the source code for its own proposed Dy-Net Neural Dynamics (DNND) model. |
| Open Datasets | No | The paper simulates its own datasets from heat equations and other ODEs ('We simulate a vector time series using a heat equation defined on a grid network.' and 'From initial conditions, we simulate a time series as the observation data.'), but it provides no concrete access information (link, DOI, repository, or formal citation) indicating that the generated data are publicly available. |
| Dataset Splits | No | The paper mentions training data ('randomly sample 80 (irregularly spaced) times 0 ≤ t_1 < t_2 < … < t_80 ≤ 5 for training.') and evaluation periods ('evaluated at random times in three periods: interpolation [0-5], short-term [5-6], and long-term [40-50]') that serve as test sets, but it does not describe a separate validation split. (A sketch of this sampling and simulation procedure follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | We created 80 irregularly spaced samples with t ∈ [0, 5], and fitted an NDCN model using the code provided by the authors on GitHub (...). [...] The initial conditions x(0) of the dynamical variables on vertices are set with random values from [0, 25]. We use the Runge-Kutta-Fehlberg (RKF45) solver to generate x(t_1), x(t_2), …, x(t_80) as the training time series. [...] We devise a warm-up schedule to adapt the loss function dynamically. At the early stage of training, the model parameters are strongly affected by time-series data points closer to the initial value, which results in smaller integration errors. More specifically, we adopt a weighting schedule for the loss function with evolving weight function w_k at epoch k as follows: w_k(t) = exp(−t/τ_k), where τ_k monotonically increases with the number of training epochs k. (A sketch of this weighting schedule also follows the table.) |
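The data-generation procedure quoted above can be illustrated with a minimal sketch: heat diffusion on a grid network, integrated with an adaptive Runge-Kutta solver and observed at 80 irregularly spaced times in [0, 5]. The grid size, diffusion rate `k`, and random seed are our assumptions (the paper does not report them), and SciPy's RK45 (Dormand-Prince) solver stands in for the RKF45 solver named in the paper:

```python
# Sketch of the simulated dataset: heat equation on a grid network,
# sampled at 80 irregularly spaced times 0 <= t1 < ... < t80 <= 5.
# Grid size, diffusion rate, and seed are assumed, not from the paper.
import numpy as np
import networkx as nx
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

G = nx.grid_2d_graph(10, 10)             # grid network (size assumed)
L = nx.laplacian_matrix(G).toarray()     # graph Laplacian
k = 1.0                                  # diffusion rate (assumed)

def heat_eq(t, x):
    # Network heat equation: dx/dt = -k * L @ x
    return -k * (L @ x)

x0 = rng.uniform(0.0, 25.0, size=L.shape[0])  # x(0) ~ U[0, 25], as stated

# 80 irregularly spaced observation times in [0, 5]
t_train = np.sort(rng.uniform(0.0, 5.0, size=80))

sol = solve_ivp(heat_eq, t_span=(0.0, 5.0), y0=x0,
                method="RK45", t_eval=t_train)
X_train = sol.y.T    # shape (80, n_nodes): the training time series
```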
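Likewise, the warm-up schedule w_k(t) = exp(−t/τ_k) can be sketched as a time-weighted loss. The paper states only that τ_k increases monotonically with the epoch k; the geometric growth schedule and the `tau0`/`growth` parameters below are our assumptions:

```python
# Sketch of the warm-up loss schedule w_k(t) = exp(-t / tau_k).
# The geometric tau schedule is assumed; the paper only requires
# that tau_k increase monotonically with the epoch k.
import torch

def warmup_weights(t, epoch, tau0=0.5, growth=1.05):
    # tau_k grows with epoch k, flattening the weights over time
    tau_k = tau0 * growth ** epoch
    return torch.exp(-t / tau_k)

def weighted_mse(pred, target, t, epoch):
    # Time-weighted MSE over a trajectory of shape (T, n_nodes).
    # Early epochs emphasize points near the initial condition
    # (small t), where integration error is smaller.
    w = warmup_weights(t, epoch)                    # shape (T,)
    per_time = ((pred - target) ** 2).mean(dim=-1)  # shape (T,)
    return (w * per_time).sum() / w.sum()
```

Early in training, small τ_k makes the weights decay quickly in t, so the loss is dominated by points near x(0); as τ_k grows, later time points contribute progressively more.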