CasCast: Skillful High-resolution Precipitation Nowcasting via Cascaded Modelling

Authors: Junchao Gong, Lei Bai, Peng Ye, Wanghan Xu, Na Liu, Jianhua Dai, Xiaokang Yang, Wanli Ouyang

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on three benchmark radar precipitation datasets show that CasCast achieves competitive performance.
Researcher Affiliation | Collaboration | Shanghai Jiao Tong University, Shanghai AI Laboratory, National Meteorological Information Center, Shanghai Meteorological Service.
Pseudocode | No | The paper does not include pseudocode or an algorithm block.
Open Source Code | Yes | The code is available at https://github.com/OpenEarthLab/CasCast.
Open Datasets | Yes | To validate the ability of CasCast to generate skillful 1 km resolution precipitation, we conducted tests on three radar echo datasets including SEVIR (Veillette et al., 2020), HKO-7 (Shi et al., 2017) and MeteoNet (Gwennaelle et al., 2020).
Dataset Splits | Yes | We follow (Gao et al., 2022a) to split SEVIR into 35718 training samples, 9060 validation samples, and 12159 test samples. ... HKO-7 by predicting the future radar echo up to 60 minutes (10 frames) given a 60-minute observation (10 frames), resulting in 8772 training samples, 492 validation samples, and 1152 test samples.
Hardware Specification | Yes | The training of diffusion takes 200k steps (18 hours) on 4 A100s with a global batch size of 32.
Software Dependencies | No | The paper mentions using the 'AdamW optimizer' and 'DDIM (Song et al., 2020)' but does not provide specific version numbers for any software components or libraries.
Experiment Setup | Yes | We follow (Gao et al., 2022a) to train the deterministic models. The training of the autoencoder is the same as in (Rombach et al., 2022), except that we utilize the AdamW optimizer with a highest learning rate of 1e-4 and a cosine learning schedule. For the diffusion settings, we apply a linear noise schedule with 1000 diffusion steps and 20 denoising steps for inference with DDIM (Song et al., 2020). Classifier-free guidance (Ho & Salimans, 2022) is adopted for training and inference. In the diffusion part, our CasFormer is optimized with the AdamW optimizer, a learning rate of 5e-4, and a cosine learning rate scheduler. The training of diffusion takes 200k steps (18 hours) on 4 A100s with a global batch size of 32.
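
The setup row above can be read as a concrete training configuration. The following is a minimal PyTorch sketch of that configuration, not the authors' code: `TinyDenoiser` is a placeholder for CasFormer, the data tensors are random stand-ins for the latent targets and deterministic-branch conditioning, and the beta range and condition-drop rate for classifier-free guidance are assumptions (the paper only reports 1000 diffusion steps, 20 DDIM inference steps, AdamW with lr 5e-4, a cosine schedule, 200k steps, and a global batch size of 32).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

TOTAL_STEPS = 200_000      # 200k training steps (paper)
DIFFUSION_STEPS = 1_000    # linear noise schedule with 1000 steps (paper)
DDIM_STEPS = 20            # DDIM denoising steps at inference (paper)
LR = 5e-4                  # AdamW peak learning rate for CasFormer (paper)
BATCH = 32                 # global batch size across 4 A100s (paper)

# Linear beta schedule; the exact beta range is not reported in the paper,
# so the common DDPM defaults are assumed here.
betas = torch.linspace(1e-4, 2e-2, DIFFUSION_STEPS)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

class TinyDenoiser(nn.Module):
    """Stand-in for CasFormer: predicts the noise added to a latent frame.
    The timestep t is ignored in this stub."""
    def __init__(self, channels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x_t, t, cond):
        return self.net(torch.cat([x_t, cond], dim=1))

model = TinyDenoiser()
optimizer = AdamW(model.parameters(), lr=LR)
scheduler = CosineAnnealingLR(optimizer, T_max=TOTAL_STEPS)  # cosine schedule

for step in range(TOTAL_STEPS):
    x0 = torch.randn(BATCH, 4, 48, 48)    # latent target (dummy data)
    cond = torch.randn(BATCH, 4, 48, 48)  # deterministic condition (dummy data)
    t = torch.randint(0, DIFFUSION_STEPS, (BATCH,))
    noise = torch.randn_like(x0)
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise  # forward diffusion

    # Classifier-free guidance: randomly drop the condition during training
    # (the drop rate of 0.1 is an assumption, not reported in the paper).
    if torch.rand(()).item() < 0.1:
        cond = torch.zeros_like(cond)

    loss = F.mse_loss(model(x_t, t, cond), noise)  # epsilon-prediction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```

At inference, the same noise schedule would be subsampled to the 20 DDIM steps noted above, with the guidance scale applied between the conditional and unconditional predictions; those details are omitted here since the paper does not report the guidance weight.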