Action2Activity: Recognizing Complex Activities from Sensor Data

Authors: Ye Liu, Liqiang Nie, Lei Han, Luming Zhang, David S. Rosenblum

IJCAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on a real-world dataset demonstrate the effectiveness of our work."
Researcher Affiliation | Academia | School of Computing, National University of Singapore; Department of Computer Science, Hong Kong Baptist University
Pseudocode | No | The paper describes its algorithms textually and mathematically but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions using LIBSVM and OpenCV, providing links to these third-party libraries, but does not state that the authors' own implementation of the described methodology is publicly available.
Open Datasets | Yes | "The Opportunity dataset [Chavarriaga et al., 2013]"
Dataset Splits | Yes | "The performance reported in this paper was measured based on 10-fold cross-validation classification accuracy."
Hardware Specification | No | The paper does not specify the hardware (e.g., CPU or GPU model, memory) used to run the experiments.
Software Dependencies | No | "We implemented this method with the help of LIBSVM. We selected a linear kernel. ... We employed the k-Nearest Neighbors in OpenCV and set K = 7."
Experiment Setup | Yes | "For MTL, we set minsup = 0.01 and t_win = 2 L_avg over all the experiments, where L_avg is the average length of action intervals in an activity. ... We initially fixed λ and θ, and then varied γ from 0.001 to 5, doubling the value at each step. ... We then set γ = 0.001, θ = 1 and varied λ. ... Finally, we set γ = 0.001, λ = 0.05 and varied θ."
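The evaluation protocol quoted in the table (10-fold cross-validation, and a coordinate-wise sweep in which γ is doubled from 0.001 up to 5) can be sketched as follows. This is an illustrative reconstruction, not the authors' unreleased implementation; the function names `doubling_grid` and `kfold_splits` are assumptions made for the sketch.

```python
def doubling_grid(start=0.001, limit=5.0):
    """Candidate hyperparameter values obtained by repeated doubling,
    matching the reported sweep of gamma from 0.001 up to 5."""
    vals = []
    v = start
    while v <= limit:
        vals.append(v)
        v *= 2.0
    return vals


def kfold_splits(n_samples, k=10):
    """Index partitions for k-fold cross-validation (no shuffling shown).

    Returns a list of (train_indices, test_indices) pairs; each sample
    appears in exactly one test fold.
    """
    indices = list(range(n_samples))
    # Distribute any remainder across the first (n_samples % k) folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(indices[start:start + size])
        start += size
    return [(sorted(set(indices) - set(test_fold)), test_fold)
            for test_fold in folds]


# Coordinate-wise sweep as described: vary gamma with the other
# hyperparameters held fixed (lambda and theta values here are the
# fixed points quoted in the paper excerpt).
gammas = doubling_grid()          # 0.001, 0.002, 0.004, ..., 4.096
fixed = {"lambda": 0.05, "theta": 1}
```

Each (train, test) pair from `kfold_splits` would then be used to fit and score the model once, with the 10 accuracies averaged, as the quoted "10-fold cross-validation classification accuracy" implies.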