Scale-teaching: Robust Multi-scale Training for Time Series Classification with Noisy Labels

Authors: Zhen Liu, Peitian Ma, Dongliang Chen, Wenbin Pei, Qianli Ma

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on multiple benchmark time series datasets demonstrate the superiority of the proposed Scale-teaching paradigm over state-of-the-art methods in terms of effectiveness and robustness.
Researcher Affiliation | Academia | Zhen Liu, South China University of Technology, Guangzhou, China, cszhenliu@mail.scut.edu.cn; Peitian Ma, South China University of Technology, Guangzhou, China, ma_scuter@163.com; Dongliang Chen, South China University of Technology, Guangzhou, China, ytucdl@foxmail.com; Wenbin Pei, Dalian University of Technology, Dalian, China, peiwenbin@dlut.edu.cn; Qianli Ma, South China University of Technology, Guangzhou, China, qianlima@scut.edu.cn
Pseudocode | Yes | Please refer to Algorithm 1 in the Appendix for the specific pseudo-code of Scale-teaching.
Open Source Code | Yes | Our implementation of Scale-teaching is available at https://github.com/qianlima-lab/Scale-teaching.
Open Datasets | Yes (see the data-loading sketch below the table) | We use three time series benchmarks (four individual large datasets [3, 52, 53], UCR 128 archive [22], and UEA 30 archive [54]) for experiments. ... For detailed information about UCR datasets, please refer to https://www.cs.ucr.edu/~eamonn/time_series_data_2018/. ... For detailed information about UEA datasets, please refer to https://www.timeseriesclassification.com/dataset.php.
Dataset Splits | No | Each UCR dataset includes a single training set and a single test set, and each time series sample has been z-normalized. ... Each dataset contains a partitioned training set and a test set. The paper evaluates on the predefined test sets and adopts hyperparameters from the default settings of related works, rather than defining a separate validation split in its experimental setup.
Hardware Specification | Yes | Finally, we build our model using PyTorch 1.10 platform with 2 NVIDIA GeForce RTX 3090 GPUs.
Software Dependencies | Yes (see the environment check below the table) | Finally, we build our model using PyTorch 1.10 platform with 2 NVIDIA GeForce RTX 3090 GPUs.
Experiment Setup | Yes (see the configuration sketch below the table) | The learning rate is set to 1e-3, the maximum batch size is set to 256, and the maximum epoch is set to 200. e_warm is set to 30 and e_update is set to 90. α in Eq. 3 is set to 0.9, σ in Eq. 4 is set to 0.25, β in Eq. 5 is set to 0.99, the largest neighbor K is set to 10, and γ is set to 0.99. In addition, following the parameter settings suggested in [23], we linearly decay the learning rate to zero from the 80th epoch to the 200th epoch.
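
To complement the Open Datasets row, here is a minimal sketch of one way to fetch a UCR/UEA archive dataset with its predefined train/test split. Using sktime's load_UCR_UEA_dataset is an assumption made for illustration only; the Scale-teaching repository may load the archives differently, and the dataset name "ArrowHead" is just an example, not one reported in the paper.

```python
# Illustrative only: fetch a UCR/UEA dataset with its predefined split.
# The authors' code may use a different loading pipeline.
import numpy as np
from sktime.datasets import load_UCR_UEA_dataset

def load_archive_split(name: str):
    """Return the predefined train/test split of a UCR/UEA dataset."""
    X_train, y_train = load_UCR_UEA_dataset(name, split="train", return_X_y=True)
    X_test, y_test = load_UCR_UEA_dataset(name, split="test", return_X_y=True)
    return X_train, y_train, X_test, y_test

# Hypothetical example dataset name, used only to make the sketch runnable.
X_train, y_train, X_test, y_test = load_archive_split("ArrowHead")
print(X_train.shape, np.unique(y_train))
```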
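For the Hardware Specification and Software Dependencies rows, a quick environment check (illustrative only) can confirm that the local PyTorch version and GPU count match the reported setup.

```python
# Illustrative check against the reported setup
# (PyTorch 1.10, 2 NVIDIA GeForce RTX 3090 GPUs).
import torch

print(torch.__version__)                  # expect a 1.10.x build
print(torch.cuda.device_count())          # expect 2 on the reported machine
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA GeForce RTX 3090"
```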
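The Experiment Setup row can be summarized as a configuration sketch. The hyperparameter values and the linear learning-rate decay from epoch 80 to epoch 200 are taken from the quoted text; the optimizer choice (Adam) and the placeholder model are assumptions made only to keep the snippet runnable, not details confirmed by the paper.

```python
# A minimal sketch of the reported hyperparameters and learning-rate schedule.
import torch

CONFIG = dict(
    lr=1e-3, batch_size=256, max_epoch=200,
    e_warm=30, e_update=90,   # warm-up and update epochs from the paper
    alpha=0.9,                # α in Eq. 3
    sigma=0.25,               # σ in Eq. 4
    beta=0.99,                # β in Eq. 5
    K=10,                     # largest neighbor count
    gamma=0.99,               # γ
)

model = torch.nn.Linear(128, 10)  # placeholder; not the paper's multi-scale encoder
optimizer = torch.optim.Adam(model.parameters(), lr=CONFIG["lr"])  # optimizer assumed

def lr_factor(epoch: int, decay_start: int = 80, max_epoch: int = 200) -> float:
    """Keep the lr constant until decay_start, then decay linearly to zero."""
    if epoch < decay_start:
        return 1.0
    return max(0.0, (max_epoch - epoch) / (max_epoch - decay_start))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)

for epoch in range(CONFIG["max_epoch"]):
    # ... warm-up, multi-scale training, and label correction would go here ...
    scheduler.step()
```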