Con4m: Context-aware Consistency Learning Framework for Segmented Time Series Classification

Authors: Junru Chen, Tianyu Cao, Jing Xu, Jiahe Li, Zhilong Chen, Tao Xiao, Yang Yang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments across multiple datasets validate the effectiveness of Con4m in handling segmented TSC tasks on MVD.
Researcher Affiliation | Collaboration | Junru Chen (Zhejiang University, jrchen_cali@zju.edu.cn); Tianyu Cao (Zhejiang University, ty.cao@zju.edu.cn); Jing Xu (State Grid Power Supply Co. Ltd., ltxu1111@gmail.com); Jiahe Li (Zhejiang University, jiaheli@zju.edu.cn); Zhilong Chen (Zhejiang University, zhilongchen@zju.edu.cn); Tao Xiao (State Grid Power Supply Co. Ltd., xtxjtu@163.com); Yang Yang (Zhejiang University, yangya@zju.edu.cn)
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks; it describes the method in text and with diagrams.
Open Source Code | Yes | The source code is available at https://github.com/MrNobodyCali/Con4m.
Open Datasets | Yes | In this work, we use three public [31, 7, 37] and one private MVD dataset to measure the performance of models. Specifically, the Tufts fNIRS to Mental Workload [31] data (fNIRS)... The HHAR (Heterogeneity Human Activity Recognition) dataset [7]... The Sleep-EDF [37] data (Sleep)...
Dataset Splits | Yes | We use cross-validation [39] to evaluate the model's generalization ability by partitioning the subjects in the data into non-overlapping subsets for training and testing. As shown in Table 1, for fNIRS and SEEG, we divide the subjects into 4 groups and follow the 2 training-1 validation-1 testing (2-1-1) setting to conduct experiments. We divide the HHAR and Sleep datasets into 3 groups and follow the 1-1-1 experimental setting.
Hardware Specification | Yes | The model is trained on a workstation (Ubuntu 20.04.5) with 2 CPUs (AMD EPYC 7H12 64-Core Processor) and 8 GPUs (NVIDIA GeForce RTX 3090).
Software Dependencies | Yes | We build our model using PyTorch 2.0.0 [52] with CUDA 11.8.
Experiment Setup | Yes | We set d=128 and the dimension of intermediate representations in the FFN module as 256 for all experiments. The number of heads and dropout rate are set as 8 and 0.1, respectively... The model is optimized using the Adam optimizer [38] with a learning rate of 1e-3 and weight decay of 1e-4, and the batch size is set as 64.
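The subject-wise 2-1-1 cross-validation split described under Dataset Splits can be sketched as follows. This is a minimal illustration, not code from the Con4m repository; the `rotate_groups` helper and the subject IDs are hypothetical, and the only facts taken from the report are the 4 non-overlapping subject groups and the 2 training / 1 validation / 1 testing assignment.

```python
def rotate_groups(groups, fold):
    """Rotate the group list so each fold assigns different groups
    to the train/validation/test roles (hypothetical helper)."""
    k = len(groups)
    return [groups[(fold + i) % k] for i in range(k)]

def split_2_1_1(groups, fold):
    """2 training / 1 validation / 1 testing split over 4 subject
    groups, as described for the fNIRS and SEEG datasets."""
    assert len(groups) == 4
    g = rotate_groups(groups, fold)
    train = g[0] + g[1]   # two groups for training
    val = g[2]            # one group for validation
    test = g[3]           # one group for testing
    return train, val, test

# Example: 8 subjects divided into 4 non-overlapping groups.
groups = [[1, 2], [3, 4], [5, 6], [7, 8]]
train, val, test = split_2_1_1(groups, fold=0)
```

Because subjects never appear in more than one group, no subject is shared between the training and testing sets, which is what makes the evaluation a test of cross-subject generalization.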
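The hyperparameters quoted under Experiment Setup can be wired into a PyTorch sketch as follows. The hyperparameter values (d=128, FFN dimension 256, 8 heads, dropout 0.1, Adam with lr 1e-3 and weight decay 1e-4, batch size 64) come from the report; everything else is an assumption. In particular, `nn.TransformerEncoderLayer` is a generic stand-in, not Con4m's actual architecture, and the sequence length of 32 is an arbitrary placeholder.

```python
import torch
import torch.nn as nn

# Values quoted from the Experiment Setup row.
d_model = 128     # representation dimension d
ffn_dim = 256     # intermediate dimension of the FFN module
n_heads = 8
dropout = 0.1
batch_size = 64

# Generic Transformer encoder layer as a placeholder for the model.
layer = nn.TransformerEncoderLayer(
    d_model=d_model,
    nhead=n_heads,
    dim_feedforward=ffn_dim,
    dropout=dropout,
    batch_first=True,
)

optimizer = torch.optim.Adam(
    layer.parameters(), lr=1e-3, weight_decay=1e-4
)

# One batch of 64 sequences; length 32 is a placeholder.
x = torch.randn(batch_size, 32, d_model)
out = layer(x)
```

The forward pass preserves the input shape, so `out` has shape (64, 32, 128); an actual training loop would add a loss over the segment labels and `optimizer.step()` calls.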