Coherence-based Label Propagation over Time Series for Accelerated Active Learning

Authors: Yooju Shin, Susik Yoon, Sundong Kim, Hwanjun Song, Jae-Gil Lee, Byung Suk Lee

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct experiments with various active learning settings to test the following hypotheses. TCLP accelerates active learning methods faster than other label propagation methods can. TCLP achieves both high accuracy and wide coverage in segment estimation. TCLP overcomes the label sparsity by the extensions discussed in Section 3.2.2. ... Datasets: The four benchmark datasets summarized in Table 1 are used. ... Accuracy metrics: Timestamp accuracy and segmental F1 score are measured at each round by five-fold cross validation." (The two metrics are sketched after the table.)
Researcher Affiliation | Collaboration | Yooju Shin (KAIST), Susik Yoon (UIUC), Sundong Kim (Institute for Basic Science), Hwanjun Song (NAVER AI Lab), Jae-Gil Lee (KAIST), Byung Suk Lee (University of Vermont)
Pseudocode | Yes | "Algorithm 1: Time-series active learning with TCLP" (An illustrative loop is sketched after the table.)
Open Source Code | Yes | "The source code is uploaded on https://github.com/kaist-dmlab/TCLP, which contains the source code for active learning (main algorithm) and data preprocessing as well as detailed instructions."
Open Datasets | Yes | "Datasets: The four benchmark datasets summarized in Table 1 are used. 50Salads contains videos at 30 frames per second that capture 25 people preparing a salad (Stein & McKenna, 2013), and GTEA contains 15 fps videos of four people (Fathi et al., 2011). ... mHealth contains ... (Banos et al., 2014); ... HAPT represents ... (Anguita et al., 2013). ... All datasets used in this paper are available in public websites and have been already anonymized, using random numeric identifiers to indicate different subjects to preserve privacy; in addition, these datasets have been extensively cited for the studies in human activity recognition and video segmentation. ... We download all datasets used in this paper from public websites, as specified in the corresponding references."
Dataset Splits | Yes | "Timestamp accuracy and segmental F1 score are measured at each round by five-fold cross validation."
Hardware Specification | Yes | "For instance, according to the experiment for the 50salads dataset conducted using Intel Xeon Gold 6226R and Nvidia RTX3080, fitting the plateau models took only about 1 to 2 minutes, whereas training the classifier took about half an hour per active learning round."
Software Dependencies | No | The paper mentions software components and techniques such as "multi-stage temporal convolutional network (MSTCN)", "temperature scaling", and "k-MEANS++", but it does not specify version numbers for the programming languages, libraries, or frameworks (e.g., Python 3.x, PyTorch 1.x, TensorFlow 2.x) used in the implementation.
Experiment Setup | Yes | "Regarding active learning hyperparameters, the number of queried data points per round (b) and the number of active learning rounds (R) are summarized in Table 1. For TCLP, we use the initial parameters for plateau models (w0 and s0) in Table 1 and temperature scaling with T = 2." (Temperature scaling is sketched after the table.)
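
The timestamp accuracy and segmental F1 metrics quoted in the table are standard for temporal segmentation tasks. The sketch below shows one common way to compute them from per-timestamp label sequences; the function names and the IoU threshold of 0.5 are illustrative assumptions, not taken from the paper or its repository.

```python
import numpy as np

def timestamp_accuracy(pred, true):
    """Fraction of timestamps whose predicted class matches the ground truth."""
    pred, true = np.asarray(pred), np.asarray(true)
    return float((pred == true).mean())

def to_segments(labels):
    """Collapse a per-timestamp label sequence into (label, start, end) runs."""
    segments, start = [], 0
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            segments.append((labels[start], start, t))  # end index is exclusive
            start = t
    return segments

def segmental_f1(pred, true, iou_threshold=0.5):
    """Segment-level F1: a predicted segment is a true positive when it overlaps
    an unmatched same-label ground-truth segment with IoU above the threshold."""
    pred_segs, true_segs = to_segments(list(pred)), to_segments(list(true))
    matched, tp = set(), 0
    for label, ps, pe in pred_segs:
        best_iou, best_j = 0.0, None
        for j, (tl, ts, te) in enumerate(true_segs):
            if tl != label or j in matched:
                continue
            inter = max(0, min(pe, te) - max(ps, ts))
            union = max(pe, te) - min(ps, ts)
            iou = inter / union if union > 0 else 0.0
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_j is not None and best_iou >= iou_threshold:
            tp += 1
            matched.add(best_j)
    fp, fn = len(pred_segs) - tp, len(true_segs) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```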
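Algorithm 1 itself is not reproduced in this summary. The following sketch only outlines the general shape of a timestamp-query active-learning round in which each queried label is propagated to a surrounding segment before retraining; the helper names (query_timestamps, propagate_labels, train_classifier, oracle) are placeholders and do not reflect TCLP's actual implementation.

```python
# Hypothetical outline of a time-series active-learning loop with label
# propagation; the callables passed in are placeholders, not TCLP's API.

def active_learning(unlabeled_series, num_rounds, budget_per_round,
                    query_timestamps, propagate_labels, train_classifier,
                    oracle):
    """Run `num_rounds` rounds: query `budget_per_round` timestamps, obtain
    their labels from the oracle, propagate each label to an estimated
    coherent segment, and retrain the classifier on the propagated labels."""
    labeled = {}  # timestamp index -> (possibly propagated) label
    model = None
    for _ in range(num_rounds):
        # 1. Select the most informative timestamps under the current model.
        queries = query_timestamps(model, unlabeled_series,
                                   budget_per_round, exclude=labeled.keys())
        # 2. Obtain ground-truth labels for the queried timestamps.
        answers = {t: oracle(t) for t in queries}
        # 3. Propagate each queried label to a surrounding segment.
        labeled.update(propagate_labels(model, unlabeled_series, answers))
        # 4. Retrain the classifier on all labels collected so far.
        model = train_classifier(unlabeled_series, labeled)
    return model
```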
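Temperature scaling, as used in the quoted setup with T = 2, divides the classifier logits by the temperature before the softmax, which softens the predicted class probabilities. A minimal NumPy sketch under that assumption (variable names are illustrative):

```python
import numpy as np

def temperature_scaled_softmax(logits, temperature=2.0):
    """Soften class probabilities by dividing the logits by the temperature
    (T = 2 in the quoted setup) before a numerically stable softmax."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract the row max for stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum(axis=-1, keepdims=True)
```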