Identifying Spatio-Temporal Drivers of Extreme Events
Authors: Mohamad Hakam Shams Eddin, Jürgen Gall
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach on three newly created synthetic benchmarks, where two of them are based on remote sensing or reanalysis climate data, and on two real-world reanalysis datasets. [...] Our evaluation shows that our approach outperforms approaches for interpretable forecasting, spatio-temporal anomaly detection, out-of-distribution detection, and multiple instance learning. Furthermore, we conduct empirical studies on two real-world reanalysis climate data. |
| Researcher Affiliation | Academia | Mohamad Hakam Shams Eddin, Juergen Gall; Institute of Computer Science, University of Bonn; Lamarr Institute for Machine Learning and Artificial Intelligence; {shams, gall}@iai.uni-bonn.de |
| Pseudocode | No | The paper describes the model components and training process in text and mathematical equations, but it does not include a dedicated 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | The source code and datasets are publicly available at the project page https://hakamshams.github.io/IDE. |
| Open Datasets | Yes | The source code and datasets are publicly available at the project page https://hakamshams.github.io/IDE. [...] We conducted the experiments on two real-world reanalysis datasets; CERRA reanalysis [106] and ERA5-Land [107]. [...] The pre-processed data used in this study are available at https://doi.org/10.60507/FK2/RD9E33 [139]. |
| Dataset Splits | Yes (see the split sketch after the table) | More details regarding the variables and the domains along with the training/validation/test splits are provided in the Appendix Sec. H and Tables 20 and 21. [...] We synthesize overall 46 years of data; 34 years for training, 6 subsequent years for validation and the last 6 years for testing. |
| Hardware Specification | Yes | The training was done mainly on clusters with NVIDIA A100 80GB and NVIDIA A40 48GB GPUs. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer [129]' and 'PyTorch Captum [128]' but does not provide specific version numbers for these or other key software components like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes (see the training sketch after the table) | Setup and implementation details. We set the hidden dimension K to 16 by default. The temporal resolution is T = 6 for the synthetic data and T = 8 for real-world data. [...] For the synthetic data we set the embedding dimension K = 16. We use one layer Video Swin Transformer with {depth=[2, 1], heads=[2, 2], window size=[[2, 4, 4], [6, 1, 1]]}. [...] We set λ(ent) = λ(div) = 0.1, λ(anomaly) = 100, and λ(commit) = 3. The models were trained with Adam optimizer [129] for 100 epochs with a batch size of 4. We use a linear warm up of 2 epochs and a cosine decay with an initial learning rate of 2 × 10⁻³ and a weight decay of 3 × 10⁻³. |
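The chronological split quoted in the Dataset Splits row is easy to make concrete. Below is a minimal sketch, not code from the paper: `START_YEAR` is a hypothetical placeholder, since the excerpt reports only the 34/6/6 proportions of the 46 synthesized years, not the calendar range.

```python
# Chronological 34/6/6-year split of the 46 synthesized years.
# START_YEAR is a hypothetical placeholder; the paper reports the
# proportions of the split, not the calendar range.
START_YEAR = 2000
years = list(range(START_YEAR, START_YEAR + 46))

train_years = years[:34]   # first 34 years for training
val_years = years[34:40]   # 6 subsequent years for validation
test_years = years[40:]    # last 6 years for testing

assert (len(train_years), len(val_years), len(test_years)) == (34, 6, 6)
```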
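The hyperparameters quoted in the Experiment Setup row map onto standard PyTorch components. The following is a minimal sketch under stated assumptions, not the authors' released training loop: the model is a stand-in, the warm-up start factor is not reported in the excerpt, and the loss weights appear only as constants, hedging on the usual assumption that the named loss terms are combined as a weighted sum.

```python
import torch

# Stand-in model; the actual architecture (a feature extractor plus a
# one-layer Video Swin Transformer) is described in the paper.
model = torch.nn.Linear(16, 16)

EPOCHS = 100
WARMUP_EPOCHS = 2  # linear warm-up of 2 epochs
BATCH_SIZE = 4

# Loss weights as reported: λ(ent) = λ(div) = 0.1, λ(anomaly) = 100, λ(commit) = 3.
LAMBDA_ENT, LAMBDA_DIV = 0.1, 0.1
LAMBDA_ANOMALY, LAMBDA_COMMIT = 100.0, 3.0

# Adam with the reported initial learning rate and weight decay.
optimizer = torch.optim.Adam(model.parameters(), lr=2e-3, weight_decay=3e-3)

# Linear warm-up followed by cosine decay; the warm-up start factor
# below is an assumption, as the excerpt does not report it.
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1e-3, total_iters=WARMUP_EPOCHS)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=EPOCHS - WARMUP_EPOCHS)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[WARMUP_EPOCHS])

for epoch in range(EPOCHS):
    # ... one pass over the training data with batch size BATCH_SIZE,
    # minimizing the weighted sum of the loss terms ...
    optimizer.step()   # placeholder for the actual update steps
    scheduler.step()
```

With these settings the learning rate rises linearly to 2 × 10⁻³ over the first two epochs and then decays toward zero by epoch 100, matching the schedule described in the quoted setup.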