Predicting Complex Activities from Ongoing Multivariate Time Series

Authors: Weihao Cheng, Sarah Erfani, Rui Zhang, Ramamohanarao Kotagiri

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct evaluations on a real-world CA dataset consisting of a rich amount of sensor data, and the results show that SimRAD outperforms state-of-the-art methods by an average of 7.2% in prediction accuracy with high confidence."
Researcher Affiliation | Academia | "Weihao Cheng, Sarah Erfani, Rui Zhang, Kotagiri Ramamohanarao, School of Computing and Information Systems, The University of Melbourne, {weihaoc@student., sarah.erfani@, rui.zhang@, kotagiri@}unimelb.edu.au"
Pseudocode | Yes | "Algorithm 1 SimRAD Training"
Open Source Code | No | The paper does not state that the code is open-sourced and does not link to a code repository for the described methodology.
Open Datasets | Yes | "The experiments are conducted on Opportunity (OPP) dataset [Roggen et al., 2010]."
Dataset Splits | Yes | "We conduct experiments based on 4-fold cross-validation. We consider that the amount of testing data is greater than the training data, as CAs can be performed in various individual ways in the real world. Therefore, we use 1/4 of the instances of one subject for training and the other 3/4 for testing."
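The split described above inverts the usual cross-validation convention: each fold trains on one quarter of a subject's instances and tests on the remaining three quarters. A minimal sketch of such a split, using synthetic stand-in data (the array shapes and fold construction are illustrative assumptions, not the paper's code):

```python
import numpy as np

def quarter_folds(n_instances, n_folds=4):
    """Yield (train_idx, test_idx) pairs where each fold uses 1/4 of the
    instances for training and the remaining 3/4 for testing, mirroring
    the inverted split described in the paper."""
    indices = np.arange(n_instances)
    folds = np.array_split(indices, n_folds)
    for i in range(n_folds):
        train_idx = folds[i]  # the 1/4 used for training
        test_idx = np.concatenate(
            [fold for j, fold in enumerate(folds) if j != i]
        )
        yield train_idx, test_idx

# Hypothetical stand-in for one subject's labelled instances; the real
# OPP data consists of multivariate sensor windows, not random vectors.
instances = np.random.default_rng(0).normal(size=(100, 16))
splits = list(quarter_folds(len(instances)))
# 4 folds, each with 25 training and 75 testing indices
```

Each index appears exactly once in the training set across the four folds, so every instance is used for training once and for testing three times.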
Hardware Specification | No | The paper mentions only the operating system ("The experiments are conducted on a 64-bit Ubuntu 14.04 LTS operating system.") without providing hardware details such as CPU or GPU models.
Software Dependencies | No | "The experimental scripts are written in Python 2.7 with the use of Scikit-learn [Pedregosa et al., 2011] and Keras [Chollet and others, 2015] packages." While Python 2.7 is specified, version numbers for Scikit-learn and Keras are not provided.
Experiment Setup | Yes | Settings of SimRAD: For the action sequence model (ASM), the paper describes the feature learner G from bottom to top as follows: the input layer uses window size w = 120 for channels of locomotion actions, and half of w for channels of left/right-hand actions, as hand actions are shorter than locomotion actions; the FC layers of each channel output 60-dim vectors; the Max-Pooling layer down-samples the concatenated vector by a scale of 2; the last FC layer outputs a 256-dim vector. The ASM is trained with a batch size of 10 for 50 epochs. For the complex activity model (CAM), the feature φ(A) is set to Temporal Patterns of 1-pattern [Liu et al., 2015], with quadratic penalty weight λt = (t/T)² for 1 ≤ t ≤ T. SimRAD is trained with L = 10 learning rounds.
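The quadratic penalty schedule is the one fully specified formula in this setup: observations later in the activity (larger t) receive weights closer to 1, so predictions made with more of the activity observed are penalized more heavily for errors. A minimal sketch of the weight computation (the function name is ours, not the paper's):

```python
def quadratic_penalty_weights(T):
    """Return [lambda_1, ..., lambda_T] with lambda_t = (t / T) ** 2,
    the quadratic penalty schedule described for the CAM."""
    return [(t / T) ** 2 for t in range(1, T + 1)]

weights = quadratic_penalty_weights(4)
# -> [0.0625, 0.25, 0.5625, 1.0]
```

The final weight is always 1.0 (t = T, the fully observed activity), and early weights grow quadratically rather than linearly, keeping penalties small while little of the activity has been seen.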