HOPE: High-order Graph ODE For Modeling Interacting Dynamics
Authors: Xiao Luo, Jingyang Yuan, Zijie Huang, Huiyu Jiang, Yifang Qin, Wei Ju, Ming Zhang, Yizhou Sun
ICML 2023 | Conference PDF | Archive PDF
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiment results on a variety of datasets demonstrate both the effectiveness and efficiency of our proposed method. We undertake extensive experiments on three benchmark datasets to verify the efficacy of our proposed methods and the results show that HOPE achieves state-of-the-art performance over competing baselines in terms of both accuracy and efficiency. |
| Researcher Affiliation | Academia | 1Department of Computer Science, University of California, Los Angeles, USA 2National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University 3Department of Statistics and Applied Probability, University of California, Santa Barbara, USA. |
| Pseudocode | Yes | Algorithm 1: Learning Algorithm of HOPE. Input: object trajectory sequence X, adjacency matrix sequence A. Output: the parameters in both the encoder and the decoder. (A hedged training-loop sketch of this encoder-decoder setup appears below the table.) |
| Open Source Code | No | The paper does not provide an explicit statement about the release of source code for the described methodology, nor does it include a direct link to a code repository. |
| Open Datasets | Yes | To evaluate our model, we utilize three datasets of interacting dynamical systems, i.e., COVID-19 (Dong et al., 2020), Social Network (Gu et al., 2017) and Spring Oscillator (Kipf et al., 2018). COVID-19 contains daily tendency records from the Johns Hopkins University (JHU) Center for Systems Science and Engineering. |
| Dataset Splits | Yes | We partition each training example into two segments based on time and utilize the first part to predict the second part. [...] As a result, it is sufficient to ensure no overlapping between the training sample and the testing sample. To be specific, we split the feature data in COVID-19, a 266-day time series, into a 233-day part and a 31-day part. The training and validation samples are extracted from the 233-day part, and the testing samples are extracted from the 31-day part. (See the split sketch below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'We adopt Pytorch (Paszke et al., 2017) and torchdiffeq (Kidger et al., 2021) for implementing all the baselines', but it does not specify concrete version numbers for these software dependencies (e.g., PyTorch 1.x.x, torchdiffeq 0.x.x). |
| Experiment Setup | Yes | The dimension of the hidden embeddings is set to 64. During training, an Adam optimizer is used with the learning rate set to 5e-3 and the weight decay set to 1e-5. The dropout rate is set to 0.2. The batch size is set to 8 and we train the model for 100 epochs. (These settings are collected in the configuration sketch below the table.) |
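
The Pseudocode and Software Dependencies rows describe an encoder-decoder learning algorithm trained on object trajectory sequences and adjacency matrices, implemented with PyTorch and torchdiffeq. Since no source code is released, the following is only a minimal graph-ODE training skeleton under those assumptions: `ODEFunc`, `GraphODEModel`, and the GRU encoder are hypothetical placeholders and do not reproduce HOPE's high-order dynamics or its use of the adjacency matrix.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # the paper names PyTorch + torchdiffeq; versions unreported


class ODEFunc(nn.Module):
    """Hypothetical latent dynamics dz/dt = f(z); HOPE's high-order graph ODE
    and its use of the adjacency matrix A are not reproduced here."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, t, z):
        return self.net(z)


class GraphODEModel(nn.Module):
    """Sketch of Algorithm 1's encoder-decoder structure: encode the observed
    trajectory segment, integrate a latent state forward in time, decode it."""
    def __init__(self, in_dim, hid_dim=64):
        super().__init__()
        self.encoder = nn.GRU(in_dim, hid_dim, batch_first=True)  # placeholder encoder
        self.ode_func = ODEFunc(hid_dim)
        self.decoder = nn.Linear(hid_dim, in_dim)                  # placeholder decoder

    def forward(self, x_obs, t_pred):
        # x_obs: [num_objects, T_obs, in_dim]; t_pred: 1-D tensor of prediction times
        _, h = self.encoder(x_obs)
        z0 = h[-1]                                 # initial latent state per object
        z_t = odeint(self.ode_func, z0, t_pred)    # [T_pred, num_objects, hid_dim]
        return self.decoder(z_t)
```

Training would then minimize a loss between the decoded trajectory and the target segment of each example, updating the encoder and decoder parameters as Algorithm 1 specifies.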
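
The Dataset Splits row reports a purely temporal, non-overlapping split of the 266-day COVID-19 series (a 233-day part for training/validation samples, a 31-day part for testing samples), with each example further cut into an observed part and a part to predict. A minimal sketch of that protocol, assuming NumPy and leaving the per-example cut length as an unreported free parameter:

```python
import numpy as np


def temporal_split(series: np.ndarray, train_days: int = 233, test_days: int = 31):
    """Split a daily series (e.g. the 266-day COVID-19 data) along the time axis:
    training/validation samples are drawn from the first `train_days` days and
    test samples from the last `test_days` days, so the two pools do not overlap."""
    return series[:train_days], series[-test_days:]


def make_example(segment: np.ndarray, cond_len: int):
    """Each example is itself cut in two by time: the first `cond_len` steps are
    observed and used to predict the remaining steps (`cond_len` is not reported
    in the table above, so it is left as a free parameter here)."""
    return segment[:cond_len], segment[cond_len:]
```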
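
The Experiment Setup row lists the reported hyperparameters; below is a minimal PyTorch configuration sketch using only those values (the model object is a stand-in so the optimizer call runs, not the HOPE architecture):

```python
import torch

# Reported settings: hidden dimension 64, Adam with lr 5e-3 and weight decay 1e-5,
# dropout 0.2, batch size 8, 100 training epochs.
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.Dropout(p=0.2))
optimizer = torch.optim.Adam(model.parameters(), lr=5e-3, weight_decay=1e-5)
batch_size, num_epochs = 8, 100
```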