Task-oriented Time Series Imputation Evaluation via Generalized Representers

Authors: Zhixian Wang, Linxiao Yang, Liang Sun, Qingsong Wen, Yi Wang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To validate our method, we conduct experiments on six datasets: the GEF load forecasting competition dataset with the corresponding temperature [39], the UCI dataset (electricity load and air quality) [40], the Traffic dataset containing road occupancy rates, and two transformer datasets, ETTh1 and ETTh2 [41].
Researcher Affiliation | Collaboration | Zhixian Wang (1,2), Linxiao Yang (2), Liang Sun (2), Qingsong Wen (2), Yi Wang (1); affiliations: 1 The University of Hong Kong, 2 DAMO Academy, Alibaba Group
Pseudocode | Yes | Algorithm 1: Task-oriented Imputation Ensemble.
Open Source Code | Yes | The corresponding code can be found in the repository https://github.com/hkuedl/Task-Oriented-Imputation. [...] We have provided a publicly accessible link to both the code and the dataset we used for the experiments.
Open Datasets | Yes | To validate our method, we conduct experiments on six datasets: the GEF load forecasting competition dataset with the corresponding temperature [39], the UCI dataset (electricity load and air quality) [40], the Traffic dataset containing road occupancy rates, and two transformer datasets, ETTh1 and ETTh2 [41]. [...] We have provided a publicly accessible link to both the code and the dataset we used for the experiments.
Dataset Splits | Yes | Table 5: Datasets used in the forecasting task. [Provides training, validation, and test date ranges for each dataset.]
Hardware Specification | Yes | We use 1 NVIDIA RTX 4090 GPU with 24GB of memory for all our experiments.
Software Dependencies | No | The paper mentions software such as the torch.SGD optimizer [53] and PyPOTS [45], but does not specify their version numbers or the version of PyTorch.
Experiment Setup | Yes | In our main experiment, we set the downstream task as univariate time series forecasting, with both input sequence and prediction lengths set to 24. [...] We mainly consider a 40% missing rate as our main experimental setup. [...] We use the torch.SGD optimizer [53] to optimize the parameters of the model, with the learning rate set to 0.1. The maximum number of epochs for each training run is 300, and the patience is set to 10. [...] The hyperparameters for each method are set as shown in Table 6.
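
The sketches below illustrate, under clearly labeled assumptions, several items referenced in the table; none of them reproduce the authors' actual code.

The Pseudocode row cites Algorithm 1, a task-oriented imputation ensemble. The paper derives the ensemble weights via generalized representers; that derivation is not reproduced here. The following is only a minimal sketch of generic weighted imputation ensembling, with hypothetical candidates and weights, to make the term concrete.

```python
# Generic weighted-ensemble sketch for intuition only. The paper's Algorithm 1
# computes the weights via generalized representers; that step is NOT shown
# here, and the candidates and weights below are placeholders.
import numpy as np

def ensemble_imputations(candidates, weights):
    """Combine K candidate imputations of shape (K, T) into one series."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize to a convex combination
    return np.tensordot(w, np.asarray(candidates), axes=1)

# Three hypothetical candidate imputations of a length-5 gap.
cands = [np.ones(5), 2 * np.ones(5), 4 * np.ones(5)]
print(ensemble_imputations(cands, [0.5, 0.3, 0.2]))  # elementwise weighted average
```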
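
For the Dataset Splits row, Table 5 specifies chronological training/validation/test date ranges per dataset. A minimal pandas sketch of such a date-range split follows; the cut-off dates, column name, and synthetic series are placeholders, not the paper's actual ranges.

```python
# Chronological split by date range, as in Table 5. The cut-off dates and
# the synthetic hourly series below are placeholders, not the paper's values.
import numpy as np
import pandas as pd

idx = pd.date_range("2016-01-01", periods=24 * 365, freq="h")
df = pd.DataFrame({"load": np.random.rand(len(idx))}, index=idx)

train = df.loc[:"2016-08-31"]            # training range
val = df.loc["2016-09-01":"2016-10-31"]  # validation range
test = df.loc["2016-11-01":]             # test range
print(len(train), len(val), len(test))
```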
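
The Software Dependencies row flags that version numbers are unspecified. When reproducing, the installed versions can at least be recorded up front; the snippet below assumes PyTorch and the PyPOTS toolkit cited as [45] are importable as torch and pypots.

```python
# Record the library versions the paper leaves unspecified. Package names
# are assumed from the paper's text (PyTorch and the PyPOTS toolkit [45]).
import torch
print("torch", torch.__version__)

try:
    import pypots
    print("pypots", pypots.__version__)
except ImportError:
    print("pypots is not installed")
```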
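
The Experiment Setup row fixes the optimizer (SGD with learning rate 0.1), a 300-epoch cap, patience 10, and 24-step input and prediction windows. A minimal training-loop sketch under those settings follows; the two-layer model and synthetic data are stand-ins, not the authors' forecasting model.

```python
# Sketch of the reported training configuration: SGD with lr=0.1, at most
# 300 epochs, early stopping with patience 10, and 24-step input/output
# windows. The model architecture and synthetic data are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)
SEQ_LEN = PRED_LEN = 24

def make_loader(n):
    # Random stand-in windows; real runs would slice the six datasets.
    return DataLoader(
        TensorDataset(torch.randn(n, SEQ_LEN), torch.randn(n, PRED_LEN)),
        batch_size=32, shuffle=True,
    )

train_loader, val_loader = make_loader(512), make_loader(128)

model = nn.Sequential(nn.Linear(SEQ_LEN, 64), nn.ReLU(), nn.Linear(64, PRED_LEN))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # the "torch.SGD optimizer [53]"
criterion = nn.MSELoss()

best_val, bad_epochs, PATIENCE = float("inf"), 0, 10
for epoch in range(300):  # maximum epochs
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = sum(criterion(model(x), y).item() for x, y in val_loader)

    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0  # improvement: reset patience
    else:
        bad_epochs += 1
        if bad_epochs >= PATIENCE:
            break  # early stopping
```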