CALANet: Cheap All-Layer Aggregation for Human Activity Recognition

Authors: Jaegyun Park, Dae-Won Kim, Jaesung Lee

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluated on seven publicly available datasets, CALANet outperformed existing methods, achieving state-of-the-art performance. In this section, we evaluate the superiority of CALANet. In Section 4.1, we describe the experimental setup. Section 4.2 presents the compared results of CALANet and other networks on seven HAR datasets. Section 4.3 provides an in-depth analysis via an ablation study. Lastly, Section 4.4 measures the actual inference time of CALANet.
Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Chung-Ang University, Republic of Korea; (2) Department of Artificial Intelligence, Chung-Ang University, Republic of Korea
Pseudocode | No | The paper describes the network architecture and theoretical derivations but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | The source codes of the CALANet are publicly available at https://github.com/jgpark92/CALANet.
Open Datasets | Yes | We used seven public benchmark datasets, including various sampling frequencies, the number of activities, and sensors. They include UCI-HAR [1], UniMiB-SHAR [30], DSADS [3], OPPORTUNITY [6], KU-HAR [47], PAMAP2 [46], and REALDISP [2]. The details for each dataset are described in Appendix F.
Dataset Splits | No | For each dataset, the paper specifies train and test splits (e.g., '70% and 30% of the dataset were used as the training and test sets, respectively' for UCI-HAR) but does not explicitly mention a separate validation dataset split.
Hardware Specification | Yes | To estimate the actual response time of our CALANet, we used the AMD Ryzen 7 5800X 8-Core Processor without the support of graphics processing units.
Software Dependencies | No | The paper mentions 'PyTorch [38]' and 'Adam optimizer [26]' but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | Precisely, they were trained for 300 epochs with a batch size of 128 using a 2080Ti graphics-processing unit. We used the Adam optimizer [26] with β1 = 0.9, β2 = 0.999, and ϵ = 10⁻⁸, where the learning rate and weight decay were set to 0.0005. For the CALANet, we set Dk, N, and L to 5, 128, and 9, respectively.
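
For concreteness, the quoted training setup can be mapped onto a short PyTorch snippet. This is only a sketch: the `PlaceholderHARNet` module, the 9-channel/6-class UCI-HAR input shapes, and the `train` loop are illustrative assumptions rather than the authors' code (their implementation is at https://github.com/jgpark92/CALANet); only the optimizer, epoch, and batch-size values come from the quoted excerpt.

```python
# Minimal sketch of the quoted training configuration, assuming PyTorch.
# PlaceholderHARNet is a stand-in so the settings can be shown end to end;
# it is NOT the CALANet architecture described in the paper.
import torch
import torch.nn as nn

# Quoted hyperparameters: Dk = 5, N = 128, L = 9.
D_K, N, L = 5, 128, 9
NUM_CLASSES = 6          # e.g. UCI-HAR defines 6 activity classes
EPOCHS = 300             # "trained for 300 epochs"
BATCH_SIZE = 128         # "with a batch size of 128"


class PlaceholderHARNet(nn.Module):
    """Illustrative stand-in for CALANet; not the authors' architecture."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, N, kernel_size=D_K, padding=D_K // 2)
        self.head = nn.Linear(N, num_classes)

    def forward(self, x):                       # x: (batch, channels, time)
        h = torch.relu(self.conv(x)).mean(-1)   # global average pooling over time
        return self.head(h)


model = PlaceholderHARNet(in_channels=9, num_classes=NUM_CLASSES)  # 9 inertial channels on UCI-HAR

# Optimizer exactly as quoted: Adam with beta1=0.9, beta2=0.999, eps=1e-8,
# learning rate 0.0005, and weight decay 0.0005.
optimizer = torch.optim.Adam(
    model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=5e-4
)
criterion = nn.CrossEntropyLoss()


def train(train_loader):
    """`train_loader` is assumed to yield (window, label) batches of size BATCH_SIZE."""
    for _ in range(EPOCHS):
        for windows, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(windows), labels)
            loss.backward()
            optimizer.step()
```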