Self-adaptive Extreme Penalized Loss for Imbalanced Time Series Prediction
Authors: Yiyang Wang, Yuchen Han, Yuhan Guo
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on real-world datasets demonstrate the superiority of our framework compared to other state-of-the-art approaches for both time series prediction and block maxima prediction tasks. |
| Researcher Affiliation | Academia | ¹College of Artificial Intelligence, Dalian Maritime University; ²College of Transportation Engineering, Dalian Maritime University |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | Code will be available at https://github.com/Ldiper/EPL. |
| Open Datasets | Yes | The wave height datasets are acquired from the National Marine Data Center (NMDC) and a Kaggle competition. The dataset employed for wind speed prediction is acquired from a Kaggle competition and includes hourly data collected from April 2006 to December 2016. Two air quality datasets are acquired from the UCI repository, covering two distinct locations: Beijing and Italy. |
| Dataset Splits | No | The paper states: "The entire time series dataset is partitioned into training and test sets in accordance with chronological order, in a roughly 7 : 3 ratio." It does not explicitly describe a validation split (a chronological-split sketch follows this table). |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies or their version numbers. |
| Experiment Setup | Yes | For all experiments, a 3-layer LSTM with a layer width of 128 nodes is used consistently across datasets. For the model parameters, the learning rate is set to 0.002, the batch size to 256, the dropout rate to 0.2, and the maximum number of training epochs to 200, with an early-stopping mechanism to prevent over-fitting (a configuration sketch follows this table). |
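
The chronological 7:3 partition noted under Dataset Splits amounts to a single index cut with no shuffling. The snippet below is a minimal sketch assuming the series is held in a NumPy array in time order; the function name and the default ratio argument are illustrative, not taken from the paper.

```python
import numpy as np

def chronological_split(series: np.ndarray, train_ratio: float = 0.7):
    """Split a time series into train/test sets by chronological order (no shuffling)."""
    cut = int(len(series) * train_ratio)
    return series[:cut], series[cut:]

# Example with a synthetic series of 1000 points -> roughly 700 train, 300 test
series = np.arange(1000, dtype=float)
train, test = chronological_split(series)
print(len(train), len(test))
```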
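
The Experiment Setup row reports a 3-layer, 128-unit LSTM trained with learning rate 0.002, batch size 256, dropout 0.2, and at most 200 epochs with early stopping. The PyTorch sketch below shows one way those hyperparameters could be wired together; the model class, input/output dimensions, optimizer choice, and early-stopping patience are assumptions not specified in the paper.

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Hypothetical forecaster matching the reported layer count, width, and dropout."""
    def __init__(self, input_size: int = 1, hidden_size: int = 128,
                 num_layers: int = 3, dropout: float = 0.2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers,
                            batch_first=True, dropout=dropout)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # x: (batch, seq_len, input_size)
        return self.head(out[:, -1])   # predict from the last time step

model = LSTMForecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=0.002)  # optimizer choice is an assumption
batch_size, max_epochs = 256, 200
early_stop_patience = 10  # patience value is an assumption; the paper only mentions early stopping
```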