EGODE: An Event-attended Graph ODE Framework for Modeling Rigid Dynamics
Authors: Jingyang Yuan, Gongbo Sun, Zhiping Xiao, Hang Zhou, Xiao Luo, Junyu Luo, Yusheng Zhao, Wei Ju, Ming Zhang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on a range of benchmark datasets validate the superiority of the proposed EGODE compared to various state-of-the-art baselines. |
| Researcher Affiliation | Academia | 1 School of Computer Science, State Key Laboratory for Multimedia Information Processing, PKU-Anker LLM Lab, Peking University; 2 University of Wisconsin-Madison; 3 University of Washington; 4 University of California, Davis; 5 University of California, Los Angeles |
| Pseudocode | Yes | Algorithm 1 Updating Algorithm of EGODE |
| Open Source Code | Yes | The source code can be found at https://github.com/yuanjypku/EGODE. |
| Open Datasets | Yes | Our proposed model EGODE is evaluated on two physical dynamics datasets, i.e., Rigid Fall [30] and Physion [6]. |
| Dataset Splits | Yes | The batch size is set to 1 for Physion and 8 for the Rigid Fall dataset. To ensure a fair comparison, we initialize all baseline models' parameters based on the corresponding papers and then fine-tune them to achieve the best results. We also employ an early stopping strategy of 10 epochs according to validation loss. |
| Hardware Specification | Yes | We conduct our experiments on a server with eight NVIDIA A40 GPUs. Since an OpenGL interface and a monitor are required for the visualization process, we visualize our results using a local PC with a single NVIDIA 4090 GPU. |
| Software Dependencies | No | The paper mentions using 'Pytorch [45]', 'torchdiffeq [27]', and 'torch-geometric [11]' but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | In our method, we initialize all MLP layers with a hidden size of 200. An Adam optimizer with an initial learning rate of 0.0001 is adopted during training. The batch size is set to 1 for Physion and 8 for the Rigid Fall dataset. We train our model for up to 1000 epochs with an early stopping strategy of 10 epochs according to validation loss. |
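The early-stopping protocol reported in the table (training halts once validation loss fails to improve for 10 consecutive epochs) can be sketched with a minimal helper. This is an illustrative implementation, not code from the EGODE repository; the class name, `patience` parameter, and the loss values in the usage example are all assumptions for demonstration.

```python
class EarlyStopping:
    """Stop training when validation loss has not improved for `patience` epochs."""

    def __init__(self, patience=10):
        self.patience = patience
        self.best_loss = float("inf")
        self.epochs_without_improvement = 0

    def step(self, val_loss):
        """Record this epoch's validation loss; return True if training should stop."""
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            self.epochs_without_improvement = 0
        else:
            self.epochs_without_improvement += 1
        return self.epochs_without_improvement >= self.patience


# Illustrative training loop with made-up validation losses that plateau.
stopper = EarlyStopping(patience=10)
losses = [0.9, 0.8, 0.7] + [0.7] * 10  # no improvement after epoch 3
stopped_at = None
for epoch, loss in enumerate(losses, start=1):
    if stopper.step(loss):
        stopped_at = epoch  # patience exhausted 10 epochs after the last improvement
        break
```

In the paper's setup this check would run once per epoch after the validation pass, alongside the Adam optimizer (initial learning rate 0.0001) and the 1000-epoch cap on training.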