RainNet: A Large-Scale Imagery Dataset and Benchmark for Spatial Precipitation Downscaling

Authors: Xuanhong Chen, Kairui Feng, Naiyuan Liu, Bingbing Ni, Yifan Lu, Zhengyan Tong, Ziang Liu

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct spatial precipitation downscaling experiments to illustrate the application of our proposed RainNet and evaluate the effectiveness of the benchmark downscaling frameworks.
Researcher Affiliation | Academia | (1) Shanghai Jiao Tong University, (2) Princeton University, (3) University of Technology Sydney
Pseudocode | No | The paper provides a pipeline diagram (Figure 3) and describes the model's architecture, but it does not include any formal pseudocode or algorithm blocks.
Open Source Code | Yes | Our dataset is available at https://neuralchen.github.io/RainNet/. For ease of use, we provide packages in our open-source project that directly compute these metrics.
Open Datasets | Yes | To alleviate these obstacles, we propose the first large-scale spatial precipitation downscaling dataset named RainNet, which contains more than 62,400 pairs of high-quality low/high-resolution precipitation maps for over 17 years... Our dataset is available at https://neuralchen.github.io/RainNet/. The proposed dataset can be used under the Creative Commons License (Attribution CC BY).
Dataset Splits | Yes | Following the mainstream evaluation protocol of DL/ML communities, cross-validation is employed. In detail, we divide the dataset into 17 parts (2002.7-2002.11, 2003.7-2003.11, ..., 2018.7-2018.11) by year, and sequentially employ each year as the test set and the remaining 16 years as the training set, that is, 17-fold cross-validation.
Hardware Specification | Yes | We use 4 NVIDIA 2080 Ti GPUs for training.
Software Dependencies | No | The paper mentions 'Pytorch' but does not specify a version number or any other software dependencies with their versions.
Experiment Setup | Yes | We train all models with the following settings. The batch size is set to 24. Precipitation maps are randomly cropped to 64×64. We employ the Adam optimizer with beta1 = 0.9 and beta2 = 0.99. The initial learning rate is 0.001, which is reduced to 1/10 every 50 epochs, and a total of 200 epochs are trained.
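The 17-fold protocol quoted in the Dataset Splits row is a leave-one-year-out scheme over the 2002-2018 rainy-season parts. A minimal sketch of the fold generation (function and variable names are illustrative, not from the RainNet codebase):

```python
def year_folds(years=range(2002, 2019)):
    """Yield (test_year, train_years) pairs for leave-one-year-out CV.

    With the default 2002-2018 range this produces 17 folds: each year
    serves once as the test set while the other 16 form the training set.
    """
    years = list(years)
    for test_year in years:
        train_years = [y for y in years if y != test_year]
        yield test_year, train_years

folds = list(year_folds())
assert len(folds) == 17                          # 17-fold cross-validation
assert all(len(train) == 16 for _, train in folds)
```

Splitting by whole years (rather than shuffling individual maps) keeps temporally correlated precipitation maps out of both train and test sets at once, which matches the protocol the paper describes.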
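The learning-rate schedule in the Experiment Setup row (0.001, divided by 10 every 50 epochs, 200 epochs total) is a standard step decay. A sketch of the schedule in plain Python, with the equivalent PyTorch call noted in a comment (default values follow the quoted setup; the function name is ours):

```python
def learning_rate(epoch, base_lr=1e-3, step=50, gamma=0.1):
    """Step-decay schedule: the lr is multiplied by `gamma` every `step` epochs.

    In PyTorch this corresponds to
    torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)
    applied to an Adam optimizer with betas=(0.9, 0.99), per the paper.
    """
    return base_lr * gamma ** (epoch // step)

# Over the reported 200 epochs the lr steps through 1e-3, 1e-4, 1e-5, 1e-6.
schedule = sorted({learning_rate(e) for e in range(200)}, reverse=True)
assert len(schedule) == 4
```
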