CARE: Modeling Interacting Dynamics Under Temporal Environmental Variation
Authors: Xiao Luo, Haixin Wang, Zijie Huang, Huiyu Jiang, Abhijeet Gangan, Song Jiang, Yizhou Sun
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments on four datasets demonstrate the effectiveness of our proposed CARE compared with several state-of-the-art approaches. |
| Researcher Affiliation | Academia | (1) University of California, Los Angeles; (2) Peking University; (3) University of California, Santa Barbara |
| Pseudocode | Yes | Appendix E (Algorithm): The whole learning algorithm of CARE is summarized in Algorithm 1. |
| Open Source Code | No | The paper does not provide an explicit statement about the release of source code for the described methodology, nor does it include any links to a code repository. |
| Open Datasets | Yes | We evaluate our proposed CARE on two particle-based simulation datasets with temporal environmental variations, i.e., Lennard-Jones Potential [47] and 3-body Stillinger-Weber Potential [4]. ... We employ two popular mesh-based simulation datasets, i.e., Cylinder Flow and Airfoil. Cylinder Flow ... by OpenFOAM [22]. Airfoil is generated in a similar manner ... by OpenFOAM [22]. |
| Dataset Splits | Yes | To ensure the accuracy of our results, we use a rigorous data split strategy: the first 80% of the samples are reserved for training, and the remaining samples are set aside for testing and validation, 10% each. (A hedged split sketch appears below the table.) |
| Hardware Specification | Yes | All experiments are conducted on a single NVIDIA A100 GPU. |
| Software Dependencies | No | We employ the fourth-order Runge-Kutta method as in the torchdiffeq Python package [24], using PyTorch [40]. While software is mentioned, specific version numbers for PyTorch and torchdiffeq are not provided. |
| Experiment Setup | Yes | We set the latent dimension to 256 and the dropout rate to 0.2. For optimization, we use the Adam optimizer with weight decay, trained by mini-batch stochastic gradient descent with a learning rate of 0.01. (A hedged setup sketch follows the table.) |
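
The reported 80/10/10 split can be read as a chronological partition of the samples. Below is a minimal Python sketch of that reading; the paper does not release splitting code, so the function name `split_samples` and the variable `trajectories` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the reported 80/10/10 split, assuming a chronological
# partition of the samples. All names here are hypothetical.
def split_samples(trajectories):
    n = len(trajectories)
    n_train = int(0.8 * n)
    n_test = int(0.1 * n)
    train = trajectories[:n_train]                 # first 80% for training
    test = trajectories[n_train:n_train + n_test]  # next 10% for testing
    val = trajectories[n_train + n_test:]          # remaining ~10% for validation
    return train, val, test
```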
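The solver and optimizer rows can also be combined into a short setup sketch. Only the RK4 solver via torchdiffeq, the 256-dimensional latent state, the 0.2 dropout rate, the Adam optimizer, and the 0.01 learning rate come from the reported setup; the `LatentODEFunc` module, the weight-decay coefficient, and the placeholder tensors are assumptions and do not reproduce CARE's actual architecture.

```python
import torch
from torchdiffeq import odeint

# Hypothetical latent ODE function standing in for CARE's dynamics module;
# only the solver call and optimizer settings mirror the reported setup.
class LatentODEFunc(torch.nn.Module):
    def __init__(self, latent_dim=256, dropout=0.2):
        super().__init__()
        # Dropout placement is illustrative only; the paper reports the rate, not where it is applied.
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, latent_dim),
            torch.nn.Tanh(),
            torch.nn.Dropout(dropout),
            torch.nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, t, z):
        return self.net(z)

func = LatentODEFunc(latent_dim=256, dropout=0.2)

# Fourth-order Runge-Kutta integration via torchdiffeq, as reported.
z0 = torch.zeros(1, 256)                 # placeholder initial latent state
t = torch.linspace(0.0, 1.0, steps=20)   # placeholder evaluation times
z_t = odeint(func, z0, t, method="rk4")

# Adam with weight decay and learning rate 0.01, as stated in the setup row;
# the weight-decay coefficient itself is not reported, so 1e-4 is a guess.
optimizer = torch.optim.Adam(func.parameters(), lr=0.01, weight_decay=1e-4)
```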