Defending Black-Box Skeleton-Based Human Activity Classifiers
Authors: He Wang, Yunfeng Diao, Zichang Tan, Guodong Guo
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We briefly introduce the experimental settings here, and the additional details are in Appendix D1. Datasets and Classifiers: We choose three widely adopted benchmark datasets in HAR: HDM05 (Müller et al. 2007), NTU60 (Shahroudy et al. 2016) and NTU120 (Liu et al. 2020b). For base classifiers, we employ four recent classifiers: ST-GCN (Yan, Xiong, and Lin 2018), CTR-GCN (Chen et al. 2021), SGN (Zhang et al. 2020b) and MSG3D (Liu et al. 2020c)." |
| Researcher Affiliation | Collaboration | He Wang*1, Yunfeng Diao*2, Zichang Tan3, Guodong Guo3 — 1 University of Leeds, UK; 2 Hefei University of Technology, Hefei, China; 3 Institute of Deep Learning, Baidu Research, Beijing, China |
| Pseudocode | No | The paper states: "The mathematical derivations and algorithms for inference, with implementation details, are in Appendix B." However, no pseudocode or clearly labeled algorithm blocks appear in the main paper text provided. |
| Open Source Code | Yes | The appendix and code are available at https://github.com/realcrane/Defending-Black-box-Skeletonbased-Human-Activity-Classifiers |
| Open Datasets | Yes | "We choose three widely adopted benchmark datasets in HAR: HDM05 (Müller et al. 2007), NTU60 (Shahroudy et al. 2016) and NTU120 (Liu et al. 2020b)." |
| Dataset Splits | No | The paper describes data pre-processing (sub-sampling frames, a sliding window) and mentions using "correctly classified testing samples", but it does not explicitly specify training/validation/test splits via percentages, sample counts, or references to predefined splits. |
| Hardware Specification | Yes | "approximately 2 months on an Nvidia Titan GPU." |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., programming language versions, library versions, or specific solver versions). |
| Experiment Setup | Yes | "We employ a perturbation budget ϵ = 0.005 for AT methods (Madry et al. 2018; Wang et al. 2020; Zhang et al. 2019b) and compare other ϵ settings. ... We use a 20-iteration attack for training SMART-AT, TRADES and MART... We use five appended models in all experiments and explain the reason in the ablation study later." |
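The setup row above quotes an l∞ perturbation budget of ϵ = 0.005 and 20-iteration attacks for adversarial training. As a rough illustration of what those two hyperparameters control, here is a minimal PGD-style attack sketch on a toy logistic-regression model. This is not the paper's code: the function name, the step size ϵ/4, and the toy model are all assumptions for illustration only; the paper attacks skeleton-based activity classifiers, not this model.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.005, steps=20, alpha=None):
    """Sketch of an l-inf PGD attack (hypothetical, not the paper's code).

    x      : input feature vector to perturb
    y      : binary label (0.0 or 1.0)
    w, b   : weights/bias of a toy logistic-regression "classifier"
    eps    : perturbation budget (the paper uses eps = 0.005)
    steps  : number of attack iterations (the paper uses 20)
    alpha  : per-step size; eps / 4 is an assumed default, not from the paper
    """
    alpha = alpha if alpha is not None else eps / 4.0
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        # Forward pass: sigmoid probability for the toy model.
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))
        # Gradient of binary cross-entropy loss w.r.t. the input: (p - y) * w.
        grad = (p - y) * w
        # Gradient-ascent step on the loss, then project back into the eps-ball.
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Usage: the adversarial input stays within eps of the original.
x = np.array([0.2, -0.1, 0.4])
w = np.array([1.0, -2.0, 0.5])
x_adv = pgd_attack(x, y=1.0, w=w, b=0.0, eps=0.005, steps=20)
assert np.all(np.abs(x_adv - x) <= 0.005 + 1e-12)
```

In standard adversarial training (Madry et al. 2018), each training batch is replaced (or augmented) with such inner-loop attack outputs, so the 20-iteration, ϵ = 0.005 setting directly controls the strength of the examples the defended model sees.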