Federated Multi-Task Attention for Cross-Individual Human Activity Recognition

Authors: Qiang Shen, Haotian Feng, Rui Song, Stefano Teso, Fausto Giunchiglia, Hao Xu

IJCAI 2022

Reproducibility Variable Result LLM Response
Research Type | Experimental | We conduct extensive experiments based on publicly available HAR datasets, which are collected in both controlled environments and real-world scenarios. Numeric results verify that our proposed FedMAT significantly outperforms baselines not only in generalizing to existing individuals but also in adapting to new individuals.
Researcher Affiliation | Academia | ¹College of Computer Science and Technology, Jilin University; ²School of Artificial Intelligence, Jilin University; ³University of Trento. {shenqiang19, fenght21, songrui20}@mails.jlu.edu.cn, {fausto.giunchiglia, stefano.teso}@unitn.it, xuhao@jlu.edu.cn
Pseudocode | Yes | Algorithm 1 FedMAT. Input: m individual-specific datasets {Du}, one per client. Output: central model Θc, individual-specific models {Wu}.
Open Source Code | Yes | We open source the SmartJLU dataset and source code on GitHub: https://github.com/Super-Shen/FedMAT.
Open Datasets | Yes | HHAR [Stisen et al., 2015]: contains 43,930,257 accelerometer and gyroscope recordings collected from 9 individuals performing 6 activities. PAMAP2 [Reiss and Stricker, 2012]: contains 3,850,505 recordings from three inertial measurement units (IMUs) located on the hand, chest, and ankle. ExtraSensory [Vaizman et al., 2017]: contains over 300,000 instances labeled with 51 types of human contexts, collected in a natural environment from 60 individuals. SmartJLU:¹ a dataset collected in China using the same tool and techniques as [Bison et al., 2021], containing over 30,000 instances labeled with daily activities, collected from 50 individuals over two weeks in a real-life scenario in which participants were asked to use their smartphones naturally. Footnote 1: We open source the SmartJLU dataset and source code on GitHub: https://github.com/Super-Shen/FedMAT.
Dataset Splits | Yes | We split the local dataset of each individual into a train set (80%) and a test set (20%). Following the meta-learning setting, all users in a dataset are split into meta-train users, which participate in the meta-learning process, and meta-test users for testing the meta-learned model. For the ExtraSensory dataset, seven activities are selected; nine individuals are randomly chosen as the meta-train set and one individual as the meta-test set. For SmartJLU, nine individuals are randomly chosen for meta-training and two for meta-testing. ... We applied leave-one-individual-out validation.
Hardware Specification | Yes | All experiments are carried out on a machine with 2 NVIDIA GeForce RTX 3090 GPUs.
Software Dependencies | Yes | We implemented FedMAT using Python 3.6 and PyTorch 1.8.
Experiment Setup | Yes | The Adam optimizer with β1 = 0.9, β2 = 0.98, and ε = 10⁻⁸ is used to update all network parameters. For federated learning, we set λ = 1.0 and perform n = 10 epochs of local training at each update round.
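Algorithm 1's interface, as quoted in the Pseudocode row (per-individual datasets {Du} in; a shared central model Θc plus individual-specific models {Wu} out), can be sketched as a FedAvg-style outer loop in which only the shared parameters are sent to the server while each client keeps its own model local. This is a minimal structural illustration under stated assumptions, not the authors' method: the "models" are plain float lists, `local_train` is a hypothetical placeholder update, and the attention mechanism of the real FedMAT is omitted entirely.

```python
def local_train(theta_c, w_u, data_u, epochs=10):
    """Hypothetical local update (placeholder, not the paper's method):
    each client nudges both the shared and its individual-specific
    parameters toward its local data mean. The paper reports n = 10
    local epochs per round."""
    target = sum(data_u) / len(data_u)
    for _ in range(epochs):
        theta_c = [p + 0.1 * (target - p) for p in theta_c]
        w_u = [p + 0.1 * (target - p) for p in w_u]
    return theta_c, w_u

def fedmat_sketch(datasets, rounds=5, dim=4):
    """FedAvg-style loop matching Algorithm 1's signature: input is a
    dict of per-individual datasets {D_u}; output is the central model
    Theta_c and the individual-specific models {W_u}."""
    theta_c = [0.0] * dim                       # shared central model
    w = {u: [0.0] * dim for u in datasets}      # per-individual models (never leave the client)
    for _ in range(rounds):
        updates = []
        for u, data_u in datasets.items():
            theta_u, w[u] = local_train(list(theta_c), w[u], data_u)
            updates.append(theta_u)             # only the shared part is uploaded
        # server aggregates the shared parameters by simple averaging
        theta_c = [sum(col) / len(updates) for col in zip(*updates)]
    return theta_c, w

datasets = {"user_a": [1.0, 1.2, 0.8], "user_b": [2.0, 2.1, 1.9]}
theta_c, w = fedmat_sketch(datasets)
```

The split between a globally averaged Θc and local Wu is what lets the scheme share structure across individuals while still personalizing per client.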
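The split protocol in the Dataset Splits row (an 80/20 train/test split per individual, plus a meta-train / meta-test partition over users) can be sketched as follows. The function name, dict layout, and fixed seed are illustrative assumptions, not the authors' code.

```python
import random

def split_users(user_data, n_meta_test, seed=0):
    """Partition users into meta-train and meta-test sets, then split
    each individual's local data 80% train / 20% test, mirroring the
    protocol described in the paper. seed is an assumed fixed value
    for reproducibility of the sketch."""
    rng = random.Random(seed)
    users = sorted(user_data)
    rng.shuffle(users)
    meta_test = set(users[:n_meta_test])        # held-out individuals
    splits = {}
    for u, samples in user_data.items():
        samples = list(samples)
        rng.shuffle(samples)
        cut = int(0.8 * len(samples))           # 80% train, 20% test
        splits[u] = {"train": samples[:cut],
                     "test": samples[cut:],
                     "role": "meta-test" if u in meta_test else "meta-train"}
    return splits

# e.g. the ExtraSensory setting: nine meta-train users, one meta-test user
data = {f"user_{i}": list(range(10)) for i in range(10)}
splits = split_users(data, n_meta_test=1)
```

Leave-one-individual-out validation, also mentioned in that row, amounts to running this with `n_meta_test=1` once per user and averaging the results.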
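The optimizer settings in the Experiment Setup row (Adam with β1 = 0.9, β2 = 0.98, ε = 10⁻⁸) can be made concrete with a single-parameter Adam update. This is the standard textbook Adam rule instantiated with the paper's hyperparameters; the learning rate is an assumed placeholder, since the excerpt does not report one.

```python
def adam_step(p, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.98, eps=1e-8):
    """One Adam update using the hyperparameters reported in the paper
    (beta1 = 0.9, beta2 = 0.98, eps = 1e-8); lr = 1e-3 is an assumption."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    p = p - lr * m_hat / (v_hat ** 0.5 + eps)
    return p, m, v

# minimize f(p) = p^2 (gradient 2p) starting from p = 1.0
p, m, v = 1.0, 0.0, 0.0
for t in range(1, 201):
    p, m, v = adam_step(p, 2 * p, m, v, t)
```

In PyTorch 1.8, the same configuration would be passed as `betas=(0.9, 0.98), eps=1e-8` to `torch.optim.Adam`.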