Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

A Symmetric Relative-Error Loss Function for Intermittent Multiscale Signal Modelling

Authors: Sergio M. Vanegas Arias, Lasse Lensu, Fredy Ruiz Palacios

IJCAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental The numerical properties of SMASPE are explored, and its performance is tested in two real-life cases for deterministic and stochastic optimization. The experiments show a clear advantage of the proposed loss function, with an improvement of up to 42% with respect to MAAPE in terms of Mean Absolute Error for deep learning models when appropriate bounds are selected.
Researcher Affiliation Academia Sergio M. Vanegas Arias¹, Lasse Lensu¹ and Fredy Ruiz Palacios². ¹Department of Computational Engineering, LUT University, Lappeenranta, Finland. ²Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy.
Pseudocode No The paper describes methods and procedures in paragraph text and mathematical equations, but it does not contain any structured pseudocode or algorithm blocks.
Open Source Code Yes The code and complementary material are made available as a GitHub repository [Vanegas Arias, 2025]: https://github.com/sergiovaneg/SMASPE (accessed 2025-05-29).
Open Datasets Yes The experiments in this section were carried out using a Kaggle dataset [Veera, 2020] which, to the authors' knowledge, is the only open-access dataset sampled with the same frequency (i.e., weekly) and containing approximately the same number of samples per sequence as the one used for the original MAAPE paper.
Dataset Splits Yes Following the methodology of the MAAPE proposers [Kim and Kim, 2016], the first 95 samples were used to fit the models and the remaining 8 to evaluate their out-of-sample performance. Missing samples in the dataset impose a natural division that was used to determine the training, validation, and test partitions, resulting in a 59.1/29.7/11.2% scheme.
Hardware Specification No The paper does not provide specific details about the hardware used for running the experiments, such as GPU/CPU models or memory specifications.
Software Dependencies Yes The experiments were implemented in Python using the Jax [Bradbury et al., 2018] and Keras 3 [Chollet and others, 2015] stacks for ML model construction and training.
Experiment Setup Yes Considering the length of the dataset, and since the authors did not provide the model parameters, a (4, 1, 4)-order ARIMA and a Holt–Winters (HW) model with seasonality 4 were fitted separately to each time series, initializing the ARIMA model weights to 0 and the convex-sum coefficients of the HW method to 0.5. The chosen architecture was an encoder/decoder Recurrent Neural Network (RNN) with Gated Recurrent Unit (GRU) layers, similar to the scheme proposed by [Cho, 2014], with its hyperparameters fixed (1737 weights in total) for 72-hour context and forecast windows. The weights were randomly initialized five times and used as common starting points for all loss functions considered in Section 4.1, keeping the weights that achieved the median validation loss. These were minimized using the Adam optimizer (learning rate of 5×10⁻⁴, 64 sequences per batch) for a maximum of 2000 epochs (early-stopping patience of 500 epochs).
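The reported improvement is measured against MAAPE (Mean Arctangent Absolute Percentage Error) from Kim and Kim [2016]. As a minimal illustrative sketch (not code from the paper or this report), MAAPE can be written in NumPy as follows; the arctangent bounds each per-sample error, which is what makes it usable on intermittent series containing zeros:

```python
import numpy as np

def maape(y_true, y_pred) -> float:
    """Mean Arctangent Absolute Percentage Error (Kim & Kim, 2016).

    Each term arctan(|(y - yhat) / y|) lies in [0, pi/2], so the loss
    stays finite even when y_true contains zeros (arctan(inf) = pi/2),
    unlike the plain MAPE which diverges there.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.abs((y_true - y_pred) / y_true)
    # 0/0 yields NaN; an exact match should count as zero error.
    ratio = np.where(y_true == y_pred, 0.0, ratio)
    return float(np.mean(np.arctan(ratio)))
```

For example, a perfect forecast gives a loss of 0, while predicting any nonzero value for a true zero contributes the saturated per-sample error π/2 rather than infinity.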
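The initialization protocol in the setup row (five random initializations, keeping the weights with the median validation loss) can be sketched as below; `train_and_validate` is a hypothetical stand-in for the actual fitting routine, assumed to return a `(weights, validation_loss)` pair for a given seed:

```python
def select_median_init(train_and_validate, n_inits: int = 5, base_seed: int = 0):
    """Fit the model from several random initializations and return the
    weights whose validation loss is the median across the runs.

    `train_and_validate(seed)` is a user-supplied callable returning
    (weights, validation_loss); it is a placeholder, not an API from
    the paper's codebase.
    """
    runs = []
    for i in range(n_inits):
        weights, val_loss = train_and_validate(seed=base_seed + i)
        runs.append((val_loss, weights))
    runs.sort(key=lambda r: r[0])
    return runs[len(runs) // 2][1]  # middle element by validation loss
```

Selecting the median run (rather than the best) gives a common, initialization-robust starting point when the same weights are reused across all loss functions under comparison.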