Conditional Local Convolution for Spatio-Temporal Meteorological Forecasting
Authors: Haitao Lin, Zhangyang Gao, Yongjie Xu, Lirong Wu, Ling Li, Stan Z. Li
AAAI 2022, pp. 7470-7478
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our model is evaluated on real-world weather benchmark datasets, achieving state-of-the-art performance with clear improvements. We conduct further analysis on local pattern visualization, the model's framework choice, the advantages of horizon maps, etc. |
| Researcher Affiliation | Academia | Haitao Lin,*1,3 Zhangyang Gao,*1,3 Yongjie Xu,1,3 Lirong Wu,1,3 Ling Li,2 Stan Z. Li1 — 1 Center of Artificial Intelligence for Research and Innovation, Westlake University; 2 Eco-Environmental Research Laboratory, Westlake University; 3 Zhejiang University. {linhaitao, gaozhangyang, xuyongjie, wulirong, liling, stan.zq.li}@westlake.edu.cn |
| Pseudocode | No | The paper describes the method and architecture in text and figures (e.g., Figure 5) but does not contain a dedicated pseudocode or algorithm block. |
| Open Source Code | Yes | The source code is available at https://github.com/BIRD-TAO/CLCRN. |
| Open Datasets | Yes | The datasets used for performance evaluation are provided in WeatherBench (Rasp et al. 2020), with 2048 nodes on the earth sphere (see the data-loading sketch after this table). |
| Dataset Splits | Yes | The hyper-parameters are chosen through careful tuning on the validation set (see Appendix D1 for more details). |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory specifications) used for experiments were mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., library names with versions) were explicitly mentioned in the paper. |
| Experiment Setup | Yes | All the models are trained with the MAE objective and optimized by the Adam optimizer for a maximum of 100 epochs. The hyper-parameters are chosen through careful tuning on the validation set (see Appendix D1 for more details). The learning rate is set to 0.001 and decayed by 0.7 every 5 epochs. The batch size is set to 64. The hidden size is 32. A training-configuration sketch follows the table. |
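
To make the dataset row concrete, here is a minimal, hedged sketch of loading a WeatherBench field with xarray. The directory layout and the variable key `"t"` are assumptions based on the public WeatherBench release, not details taken from the paper. At the benchmark's 5.625° resolution, the grid is 32 × 64 = 2048 points, matching the 2048 nodes noted above.

```python
# Hedged sketch: load a WeatherBench field (e.g., temperature at 850 hPa)
# with xarray. Paths and the variable key "t" are hypothetical; see the
# WeatherBench release (Rasp et al. 2020) for the actual download layout.
import xarray as xr

ds = xr.open_mfdataset(
    "weatherbench/5.625deg/temperature_850/*.nc",  # hypothetical local path
    combine="by_coords",
)
t850 = ds["t"]          # hypothetical variable key in the NetCDF files
print(t850.sizes)       # expect lat=32, lon=64 -> 2048 grid nodes
```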
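The reported setup maps directly onto standard PyTorch components. Below is a minimal sketch assuming a placeholder two-layer model and random tensors in place of CLCRN and the WeatherBench loaders; only the hyper-parameters (MAE loss, Adam, learning rate 0.001 decayed by 0.7 every 5 epochs, batch size 64, hidden size 32) come from the paper. The authors' implementation is at https://github.com/BIRD-TAO/CLCRN.

```python
# Minimal sketch of the reported training configuration. The model and data
# below are stand-ins, NOT the authors' CLCRN or the WeatherBench pipeline.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

num_nodes, hidden_size = 2048, 32
model = nn.Sequential(                       # placeholder for CLCRN
    nn.Linear(num_nodes, hidden_size),
    nn.ReLU(),
    nn.Linear(hidden_size, num_nodes),
)
criterion = nn.L1Loss()                      # MAE target function
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.7)

# Random tensors standing in for the WeatherBench train split.
data = TensorDataset(torch.randn(256, num_nodes), torch.randn(256, num_nodes))
train_loader = DataLoader(data, batch_size=64, shuffle=True)

for epoch in range(100):                     # maximum of 100 epochs
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()                         # lr *= 0.7 every 5 epochs
```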