Physics-aware Difference Graph Networks for Sparsely-Observed Dynamics
Authors: Sungyong Seo*, Chuizheng Meng*, Yan Liu
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the superiority of PA-DGN in the approximation of directional derivatives and the prediction of graph signals on the synthetic data and the real-world climate observations from weather stations. We verify that PA-DGN is effective in approximating directional derivatives and predicting graph signals in synthetic data. Then, we conduct exhaustive experiments to predict climate observations from land-based weather stations and demonstrate that PA-DGN outperforms other baselines. |
| Researcher Affiliation | Academia | Sungyong Seo, Chuizheng Meng, Yan Liu, Department of Computer Science, University of Southern California, {sungyons,chuizhem,yanliu.cs}@usc.edu |
| Pseudocode | No | The paper describes algorithms and models in text and mathematical equations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | Yes | We sample the weather stations located in the United States from the Online Climate Data Directory of the National Oceanic and Atmospheric Administration (NOAA) and choose the stations which have actively measured meteorological observations during 2015. We tested our proposed method and baselines on the NEMO sea surface temperature (SST) dataset. Available at http://marine.copernicus.eu/services-portfolio/access-to-products/?option=com_csw&view=details&product_id=GLOBAL_ANALYSIS_FORECAST_PHY_001_024. |
| Dataset Splits | Yes | The 1-year sequential data are split into the train set (8 months), the validation set (2 months), and the test set (2 months), respectively. (A sketch of this chronological split appears after the table.) |
| Hardware Specification | Yes | All experiments are implemented with Python 3.6 and PyTorch 1.1.0, and are run with NVIDIA GTX 1080 Ti GPUs. |
| Software Dependencies | Yes | All experiments are implemented with Python 3.6 and PyTorch 1.1.0, and are run with NVIDIA GTX 1080 Ti GPUs. |
| Experiment Setup | Yes | Training hyper-parameters: We use the Adam optimizer with learning rate 1e-3, batch size 8, and weight decay of 5e-4. All experiments are trained for a maximum of 2000 epochs with early stopping. All experiments are trained using inverse sigmoid scheduled sampling with the coefficient k = 107. (A sketch of this setup appears after the table.) |
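
The dataset split quoted above is chronological rather than random. Below is a minimal sketch of such an 8/2/2-month split, assuming hourly observations stored as a single time-ordered array; the array shape, resolution, and variable names are illustrative, not taken from the paper.

```python
import numpy as np

# One year of sequential observations: (time steps, stations, features).
# Hourly resolution and 54 stations are assumptions for illustration.
data = np.random.randn(365 * 24, 54, 2)

month = 30 * 24                       # approximate hours per month
train = data[: 8 * month]             # first 8 months
val = data[8 * month : 10 * month]    # next 2 months
test = data[10 * month :]             # final 2 months
```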
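To make the reported training setup concrete, here is a minimal sketch assuming the standard inverse sigmoid scheduled-sampling decay of Bengio et al. (2015); `teacher_forcing_prob` and the stand-in model are illustrative names, not the authors' code.

```python
import math
import random

import torch

def teacher_forcing_prob(step: int, k: float = 107.0) -> float:
    """Inverse sigmoid decay (Bengio et al., 2015): eps = k / (k + exp(step / k)).

    k = 107 is the coefficient quoted above; the original PDF may intend 10^7.
    """
    return k / (k + math.exp(step / k))

# Reported settings: Adam, lr 1e-3, weight decay 5e-4, batch size 8,
# at most 2000 epochs with early stopping.
model = torch.nn.Linear(4, 4)  # stand-in for the PA-DGN model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)

for epoch in range(2000):
    # With probability eps, condition the decoder on ground truth rather than
    # on its own predictions (scheduled sampling); eps decays over training.
    use_ground_truth = random.random() < teacher_forcing_prob(epoch)
    ...  # one training epoch; break when validation loss stops improving
```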