MHCCL: Masked Hierarchical Cluster-Wise Contrastive Learning for Multivariate Time Series
Authors: Qianwen Meng, Hangwei Qian, Yong Liu, Lizhen Cui, Yonghui Xu, Zhiqi Shen
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experimental evaluations on seven widely-used multivariate time series datasets. The results demonstrate the superiority of MHCCL over the state-of-the-art approaches for unsupervised time series representation learning. |
| Researcher Affiliation | Academia | 1 School of Software, Shandong University, Jinan, China; 2 Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, Jinan, China; 3 Lund University, Sweden; 4 School of Computer Science and Engineering, Nanyang Technological University, Singapore |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/mqwfrog/MHCCL. |
| Open Datasets | Yes | We select 7 widely-used multivariate time series datasets from popular archives, taking into consideration the application fields, length of sequences, number of instances, etc. The datasets include SHAR (Micucci, Mobilio, and Napoletano 2016), Epilepsy (Andrzejak et al. 2001), WISDM (Kwapisz, Weiss, and Moore 2010), HAR from the UCI archive, and PenDigits, EW (EigenWorms), FM (FingerMovements) from the UEA archive (Bagnall et al. 2018). |
| Dataset Splits | Yes | We split the data into 80% for training and 20% for testing, and use 20% of the training data for validation, for all datasets except those from the UEA archive, which have a pre-defined train-test split. |
| Hardware Specification | Yes | All models are implemented with PyTorch and the experimental evaluations are conducted on an NVIDIA GeForce RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The batch size B is 128 by default and is reduced to 16 or 32 for small datasets. Stochastic Gradient Descent (SGD) is adopted as the optimizer and each model is trained for 200 epochs. |
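The dataset-split and experiment-setup rows above fully specify the reported protocol (80/20 train-test split with 20% of the training data for validation, SGD optimizer, 200 epochs, batch size 128 or 16/32 for small datasets). The following is a minimal PyTorch sketch of that protocol, not the authors' released code: `full_dataset`, `MHCCLModel`, the random seed, and the learning rate are all assumptions introduced here for illustration.

```python
import torch
from torch.utils.data import DataLoader, random_split

def make_splits(full_dataset, seed=0):
    """80% train / 20% test, then 20% of train held out for validation (as reported)."""
    n_total = len(full_dataset)
    n_train = int(0.8 * n_total)
    train_set, test_set = random_split(
        full_dataset, [n_train, n_total - n_train],
        generator=torch.Generator().manual_seed(seed))
    n_val = int(0.2 * len(train_set))
    train_set, val_set = random_split(
        train_set, [len(train_set) - n_val, n_val],
        generator=torch.Generator().manual_seed(seed))
    return train_set, val_set, test_set

def train(model, train_set, batch_size=128, epochs=200, lr=1e-3):
    """SGD optimizer, 200 epochs, batch size 128 by default (16/32 for small datasets).
    The learning rate is a placeholder; the paper does not report it here."""
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for batch in loader:
            loss = model(batch)  # placeholder: model returns its contrastive loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

For the UEA datasets, `make_splits` would be skipped in favor of the archive's pre-defined train-test split, consistent with the dataset-splits row above.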