Context Consistency Regularization for Label Sparsity in Time Series

Authors: Yooju Shin, Susik Yoon, Hwanjun Song, Dongmin Park, Byunghyun Kim, Jae-Gil Lee, Byung Suk Lee

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate that the proposed framework outperforms the existing state-of-the-art consistency regularization frameworks through comprehensive experiments on real-world time-series datasets.
Researcher Affiliation | Collaboration | 1 Graduate School of Data Science, KAIST, Korea; 2 Department of Computer Science, University of Illinois at Urbana-Champaign, USA; 3 AWS AI Labs, USA; 4 School of Computing, KAIST, Korea; 5 Department of Computer Science, University of Vermont, USA.
Pseudocode | Yes | Algorithm 1 describes how CrossMatch works in time-series consistency regularization.
Open Source Code | Yes | The source code is provided at https://github.com/kaist-dmlab/CrossMatch.
Open Datasets | Yes | We use three widely-used benchmark datasets in Table 1. HAPT is a sensor time-series dataset tracking human movements in a laboratory, sampled at a frequency of 50Hz (Anguita et al., 2013). mHealth is a similar action-recognition dataset recorded with more wearable sensors, such as 3D accelerometers, 3D gyroscopes, 3D magnetometers, and electrodes, also sampled at 50Hz (Banos et al., 2014). Opportunity is a collection of sensor recordings at 100Hz capturing natural daily human activities with wearable, object, and ambient sensors (Roggen et al., 2010).
Dataset Splits | Yes | We measure timestamp accuracy and segmental F1 score with five-fold cross-validation and report the average value with standard deviation over five runs. (A sketch of the segmental F1 metric appears after this table.)
Hardware Specification | Yes | For every experiment, we use Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz and NVIDIA RTX 3090.
Software Dependencies | No | The paper mentions using 'MS-TCN as the backbone sequential classifier' and the 'SGD optimizer', but does not specify version numbers for these or other software dependencies (e.g., Python, PyTorch/TensorFlow versions).
Experiment Setup | Yes | For CrossMatch, we set the confidence threshold τ to 0.95 and the weight of the unlabeled loss λ to 1. The model is first trained without any pseudo-labels, i.e., only using the labeled batches. We start to update the model with pseudo-labels after the number of pseudo-labels in each class for a batch is balanced; formally, this condition is satisfied when the entropy of the numbers of pseudo-labels per class has been above 0.99 for the last 100 iterations... Table 6 (training hyperparameters): Stages: 4; Layers: 11; batch sizes B_L: 4 and B_U: 8; Optimizer: SGD; Momentum: 0.9; Nesterov: True; η: 0.005; Scheduling: cos(7πi/2I). (Hedged sketches of the thresholded loss, warm-up condition, and optimizer setup follow this table.)
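
To make the reported setup concrete, here is a minimal PyTorch sketch of a FixMatch-style confidence-thresholded consistency loss using the paper's stated values τ = 0.95 and λ = 1. This is our illustration, not the authors' CrossMatch code: the function and tensor names (`semi_supervised_loss`, `logits_weak`, `logits_strong`) are hypothetical, and CrossMatch's actual context-based views differ from generic weak/strong augmentation pairs.

```python
import torch
import torch.nn.functional as F

TAU = 0.95      # confidence threshold tau (value from the paper)
LAMBDA_U = 1.0  # weight of the unlabeled loss lambda (value from the paper)

def semi_supervised_loss(logits_labeled, targets, logits_weak, logits_strong):
    # Supervised cross-entropy on the labeled batch.
    loss_l = F.cross_entropy(logits_labeled, targets)

    # Hard pseudo-labels from one view; keep only confident timestamps.
    with torch.no_grad():
        probs = torch.softmax(logits_weak, dim=-1)
        conf, pseudo = probs.max(dim=-1)
        mask = (conf >= TAU).float()

    # Consistency term: the other view must predict the pseudo-label.
    per_timestamp = F.cross_entropy(logits_strong, pseudo, reduction="none")
    loss_u = (per_timestamp * mask).mean()
    return loss_l + LAMBDA_U * loss_u

# Example with random tensors: 4 labeled and 8 unlabeled timestamps, 6 classes.
loss = semi_supervised_loss(torch.randn(4, 6), torch.randint(0, 6, (4,)),
                            torch.randn(8, 6), torch.randn(8, 6))
```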
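The warm-up condition ("entropy of the numbers of pseudo-labels per class above 0.99 for the last 100 iterations") can be checked as below. One assumption we make explicit: a 0.99 threshold only makes sense for entropy normalized to [0, 1], so we divide by log(num_classes); the helper names are ours.

```python
import math
from collections import Counter, deque

def class_balance_entropy(pseudo_labels, num_classes):
    """Entropy of per-class pseudo-label counts in a batch, normalized by
    log(num_classes) so that 1.0 means perfectly balanced classes.
    `pseudo_labels` is an iterable of int class ids (e.g., tensor.tolist()).
    (The normalization is our assumption; the paper states the 0.99 threshold.)"""
    counts = Counter(pseudo_labels)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(num_classes)

history = deque(maxlen=100)  # balance entropies of the last 100 iterations

def start_using_pseudo_labels(batch_pseudo_labels, num_classes):
    """True once the entropy has stayed above 0.99 for 100 straight iterations."""
    history.append(class_balance_entropy(batch_pseudo_labels, num_classes))
    return len(history) == 100 and min(history) > 0.99
```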
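The Table 6 optimizer settings map directly onto PyTorch. Two caveats: the `Conv1d` module below is only a placeholder for the MS-TCN backbone (4 stages, 11 layers), and the extracted schedule cos(7πi/2I) would turn negative partway through training, so we assume the standard FixMatch form cos(7πi/16I) that it closely resembles; treat that substitution as our guess, not the paper's formula.

```python
import math
import torch

model = torch.nn.Conv1d(6, 12, kernel_size=3, padding=1)  # placeholder backbone
TOTAL_ITERS = 10_000  # "I": total training iterations (assumed value)

# Table 6: SGD, momentum 0.9, Nesterov, eta = 0.005.
optimizer = torch.optim.SGD(model.parameters(), lr=0.005,
                            momentum=0.9, nesterov=True)

# Cosine decay; we use the FixMatch-style cos(7*pi*i / (16*I)) multiplier,
# which stays positive over training (see the caveat above).
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda i: math.cos(7 * math.pi * i / (16 * TOTAL_ITERS)))

for i in range(TOTAL_ITERS):
    # ... forward/backward pass would go here ...
    optimizer.step()   # placeholder step so the scheduler ordering is valid
    scheduler.step()
```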
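Finally, the "segmental F1 score" named in the evaluation row is the standard overlap-based F1 from the action-segmentation literature: a predicted segment is a true positive if it sufficiently overlaps an unmatched ground-truth segment of the same class. Below is a minimal pure-Python sketch; the IoU threshold of 0.5 is an assumed example value, since the extraction does not state which overlap levels the paper reports.

```python
def get_segments(labels):
    """Split a framewise label sequence into (label, start, end) runs."""
    segments, start = [], 0
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            segments.append((labels[start], start, t))  # end is exclusive
            start = t
    return segments

def segmental_f1(pred, gt, iou_threshold=0.5):
    """F1 over segments: a predicted segment is a true positive if its IoU with
    an unmatched same-class ground-truth segment reaches iou_threshold."""
    pred_segs, gt_segs = get_segments(pred), get_segments(gt)
    matched = [False] * len(gt_segs)
    tp = fp = 0
    for label, s, e in pred_segs:
        ious = [0.0] * len(gt_segs)
        for j, (gl, gs, ge) in enumerate(gt_segs):
            if gl == label and not matched[j]:
                inter = max(0, min(e, ge) - max(s, gs))
                ious[j] = inter / (max(e, ge) - min(s, gs))
        best = max(range(len(ious)), key=ious.__getitem__) if ious else -1
        if best >= 0 and ious[best] >= iou_threshold:
            tp, matched[best] = tp + 1, True
        else:
            fp += 1
    fn = matched.count(False)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# Example: a perfect segmentation gives F1 = 1.0.
print(segmental_f1([0, 0, 1, 1, 1, 2], [0, 0, 1, 1, 1, 2]))  # -> 1.0
```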