MGTCF: Multi-Generator Tropical Cyclone Forecasting with Heterogeneous Meteorological Data

Authors: Cheng Huang, Cong Bai, Sixian Chan, Jinglin Zhang, YuQuan Wu

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To prove the effectiveness of MGTCF, we conduct extensive experiments on the China Meteorological Administration Tropical Cyclone Best Track Dataset. MGTCF obtains better performance compared with other deep learning methods and outperforms the official prediction method of the China Central Meteorological Observatory in most indexes."
Researcher Affiliation | Academia | 1 College of Computer Science, Zhejiang University of Technology; 2 Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province; 3 KLME, CIC-FEMD, Nanjing University of Information Science & Technology; 4 School of Control Science and Engineering, Shandong University; 5 Institute of Software, Chinese Academy of Sciences
Pseudocode | No | The paper describes the model architecture and the mathematical formulations of its components and loss functions, but it does not include a distinct pseudocode block or algorithm. (A hedged sketch of what such pseudocode might look like is given after this table.)
Open Source Code | Yes | "We will also open our codes on Github at https://github.com/Zjut-MultimediaPlus/MGTCF."
Open Datasets | Yes | "Data1d we used were from the CMA-BST (Ying et al. 2014). ... Data2d we used, geopotential height (GPH), were from the fifth-generation atmospheric reanalysis of the global climate (ERA5) (ECMWF 2022)" (An illustrative ERA5 retrieval sketch follows the table.)
Dataset Splits | Yes | "Of the data from 1950 to 2016, 80% were used for the training set and 20% were used for the validation set. The data for the years 2017 to 2019 were regarded as the test set." (See the split sketch after the table.)
Hardware Specification | Yes | "We implemented MGTCF on the PyTorch platform and ran it on an NVIDIA RTX A6000 GPU."
Software Dependencies | No | The paper mentions the "PyTorch platform" but does not specify a version number. It also names the Adam optimizer, and lists no other software dependencies with versions.
Experiment Setup | Yes | "We used Adam... to optimize our model with an initial learning rate of 0.0001. We trained MGTCF with a batch size of 96 and 100+q epochs. The hyperparameter q was 2. The number of generators K and the sampling number l were set as 6." (See the training-configuration sketch after the table.)
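
Since the paper offers no algorithm block, the following is a minimal sketch of a K-generator forecaster with a learned chooser, loosely following the multi-generator idea described above. The `MultiGenSketch` name, the placeholder linear layers, and the argmax selection rule are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MultiGenSketch(nn.Module):
    """Hedged sketch: K generators plus a network that scores them.

    Placeholder linear layers stand in for the real generators; the
    argmax selection is an assumption made for illustration only.
    """

    def __init__(self, k: int = 6, feat_dim: int = 64, out_dim: int = 2):
        super().__init__()
        self.generators = nn.ModuleList(
            nn.Linear(feat_dim, out_dim) for _ in range(k))
        self.chooser = nn.Linear(feat_dim, k)  # one score per generator

    def forward(self, env_feat: torch.Tensor) -> torch.Tensor:
        scores = self.chooser(env_feat)                     # (B, K)
        preds = torch.stack(
            [g(env_feat) for g in self.generators], dim=1)  # (B, K, D)
        best = scores.argmax(dim=-1)                        # (B,)
        return preds[torch.arange(preds.size(0)), best]     # (B, D)

model = MultiGenSketch()
forecast = model(torch.randn(8, 64))  # 8 samples of 64-d environment features
```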
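
The ERA5 geopotential fields quoted above are publicly distributed through the Copernicus Climate Data Store, so retrieval can be scripted with the `cdsapi` client. A minimal sketch follows; the pressure level, date, and output filename are assumptions for illustration, not values stated in the paper (geopotential must also be divided by g ≈ 9.80665 m/s² to obtain geopotential height).

```python
import cdsapi

# Requires a free CDS account and credentials in ~/.cdsapirc.
client = cdsapi.Client()
client.retrieve(
    "reanalysis-era5-pressure-levels",
    {
        "product_type": "reanalysis",
        "variable": "geopotential",
        "pressure_level": "500",   # assumed level; the paper's choice may differ
        "year": "2019",
        "month": "08",
        "day": "01",
        "time": "00:00",
        "format": "netcdf",
    },
    "era5_geopotential.nc",
)
```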
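
The quoted split is straightforward to reproduce. Below is a minimal sketch, assuming each track record carries its year and that the 80/20 train/validation division is a seeded random shuffle; the paper does not say whether that division is random or chronological.

```python
import random

def split_cma_bst(tracks):
    """Split (year, track) records per the paper's description.

    1950-2016 -> 80% train / 20% validation (random split assumed here);
    2017-2019 -> test.
    """
    historical = [t for t in tracks if 1950 <= t[0] <= 2016]
    test = [t for t in tracks if 2017 <= t[0] <= 2019]

    rng = random.Random(0)  # fixed seed so the split is reproducible
    rng.shuffle(historical)
    n_train = int(0.8 * len(historical))
    return historical[:n_train], historical[n_train:], test
```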
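
Finally, the quoted hyperparameters map directly onto a PyTorch training configuration. A minimal sketch with a placeholder generator class (the real networks live in the authors' repository):

```python
import torch
import torch.nn as nn

# Hyperparameters quoted from the paper.
LR = 1e-4         # initial Adam learning rate
BATCH_SIZE = 96
Q = 2             # hyperparameter q
EPOCHS = 100 + Q  # "100 + q" epochs
K = 6             # number of generators
L = 6             # sampling number l (used when drawing forecast samples)

# Placeholder network; not the authors' architecture.
class PlaceholderGenerator(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

generators = nn.ModuleList(PlaceholderGenerator() for _ in range(K))
optimizer = torch.optim.Adam(generators.parameters(), lr=LR)
```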