Correlative Channel-Aware Fusion for Multi-View Time Series Classification

Authors: Yue Bai, Lichen Wang, Zhiqiang Tao, Sheng Li, Yun Fu (pp. 6714–6722)

AAAI 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Extensive experimental results on three real-world datasets demonstrate the superiority of our C2AF over the state-of-the-art methods. A detailed ablation study is also provided to illustrate the indispensability of each model component." |
| Researcher Affiliation | Academia | "1 Department of Electrical and Computer Engineering, Northeastern University, Boston, USA; 2 Department of Computer Science and Engineering, Santa Clara University, Santa Clara, USA; 3 Department of Computer Science, University of Georgia, Athens, USA" |
| Pseudocode | Yes | "Algorithm 1 The procedure of training C2AF algorithm." |
| Open Source Code | No | "Code will be released at https://github.com/yueb17/C2AF" |
| Open Datasets | Yes | "EV-Action (Wang et al. 2019) is a multi-view human action dataset... NTU RGB+D (Shahroudy et al. 2016) is a large-scale dataset for multi-view action recognition... UCI Daily and Sports Activities (Asuncion and Newman 2007) is a multivariate time series dataset..." |
| Dataset Splits | Yes | "We choose the first 40 subjects for training and the rest 13 subjects for test." (EV-Action) "We use the cross-subject benchmark provided by the original dataset paper, which contains 40320 samples for training and 16560 samples for test." (NTU RGB+D) |
| Hardware Specification | No | "Our model is implemented using Tensorflow with GPU acceleration." (No specific GPU model or other hardware details are provided.) |
| Software Dependencies | No | "Our model is implemented using Tensorflow with GPU acceleration." (TensorFlow is mentioned, but no version numbers or other software dependencies are listed.) |
| Experiment Setup | Yes | "We set 128 as batch size. The Adam optimizer (Kingma and Ba 2014) is utilized for optimization and the learning rates are set as 0.0001 for all the view-specific and final classifiers synchronously." |
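For context on the reported setup: the Adam optimizer cited above (Kingma and Ba 2014) applies a bias-corrected moving-average update per parameter. Below is a minimal plain-Python sketch of one Adam update step, using the paper's learning rate of 0.0001; the β₁, β₂, and ε values are Adam's published defaults, not values stated in this excerpt, and `adam_step` is an illustrative helper, not code from the paper.

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (Kingma and Ba 2014) over a list of scalar parameters.

    theta: current parameter values; grad: gradients at theta;
    m, v: first/second moment estimates; t: 1-based step count.
    Returns updated (theta, m, v). lr=1e-4 matches the paper's setting;
    beta1/beta2/eps are the Adam defaults (an assumption here).
    """
    new_theta, new_m, new_v = [], [], []
    for p, g, mi, vi in zip(theta, grad, m, v):
        mi = beta1 * mi + (1 - beta1) * g          # first moment estimate
        vi = beta2 * vi + (1 - beta2) * g * g      # second moment estimate
        m_hat = mi / (1 - beta1 ** t)              # bias correction
        v_hat = vi / (1 - beta2 ** t)
        new_theta.append(p - lr * m_hat / (math.sqrt(v_hat) + eps))
        new_m.append(mi)
        new_v.append(vi)
    return new_theta, new_m, new_v
```

At the first step (t=1) the bias correction makes the update magnitude approximately lr times the gradient's sign, which is why Adam's early steps are roughly scale-invariant in the gradient.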