Adversarial Bone Length Attack on Action Recognition
Authors: Nariki Tanaka, Hiroshi Kera, Kazuhiko Kawamoto (pp. 2335-2343)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted experiments on the NTU RGB+D and HDM05 datasets and demonstrate that the proposed attack successfully deceived models with sometimes greater than 90% success rate by small perturbations. |
| Researcher Affiliation | Academia | Nariki Tanaka (1), Hiroshi Kera (2), Kazuhiko Kawamoto (2); (1) Graduate School of Science and Engineering, Chiba University; (2) Graduate School of Engineering, Chiba University; {ntanaka, kera}@chiba-u.jp, kawa@faculty.chiba-u.jp |
| Pseudocode | Yes | Algorithm 1: Pseudocode of adversarial bone length attack |
| Open Source Code | No | The paper provides links to the official code for the *target models* (ST-GCN and SGN) but not to the authors' own implementation of the proposed adversarial bone length attack methodology. |
| Open Datasets | Yes | We used the NTU RGB+D (Shahroudy et al. 2016) and HDM05 (Müller et al. 2007) datasets, which are 3D skeleton action datasets. |
| Dataset Splits | Yes | We randomly divided samples of each class into a training set (80%), validation set (10%), and testing set (10%). |
| Hardware Specification | Yes | All experiments were conducted using an Intel Core i7-6850K CPU and TITAN RTX GPU. |
| Software Dependencies | No | The paper implies standard deep learning tooling (e.g., via the ST-GCN and SGN implementations, which build on frameworks such as PyTorch), but does not specify exact version numbers for any software dependency. |
| Experiment Setup | Yes | The maximum number of iterations of the PGD was set to 50. The step size was set to α = 0.01, as in (Liu, Akhtar, and Mian 2020). |
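The setup row above describes a PGD-based attack with at most 50 iterations and step size α = 0.01. The following is a minimal sketch of such a signed-gradient PGD loop with those reported hyperparameters. The toy quadratic loss, the finite-difference gradient, and the l∞ projection radius `epsilon` are all illustrative assumptions, not the authors' implementation: the real attack perturbs per-bone length parameters of a 3D skeleton and backpropagates through the action-recognition model.

```python
import numpy as np

def toy_loss(delta):
    # Stand-in (hypothetical) for the model's loss as a function of the
    # bone-length perturbation vector delta.
    return float(np.sum((delta - 0.3) ** 2))

def numerical_grad(f, x, eps=1e-5):
    # Finite-difference gradient; a real implementation would use autograd.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def pgd_attack(loss_fn, dim, epsilon=0.1, alpha=0.01, iters=50):
    # PGD loop with the paper's reported hyperparameters:
    # iters = 50 (maximum PGD iterations), alpha = 0.01 (step size).
    # Each step moves along the sign of the gradient, then projects the
    # perturbation back into the l_inf ball of radius epsilon (assumed here).
    delta = np.zeros(dim)
    for _ in range(iters):
        g = numerical_grad(loss_fn, delta)
        delta = delta - alpha * np.sign(g)         # signed-gradient step
        delta = np.clip(delta, -epsilon, epsilon)  # projection step
    return delta

delta = pgd_attack(toy_loss, dim=4)
print(np.round(delta, 3))  # each component saturates at the epsilon bound, 0.1
```

In this toy instance the unconstrained minimizer of the loss (0.3) lies outside the ball, so every component of `delta` walks in 0.01 steps until it is clipped at `epsilon = 0.1`, illustrating how the projection keeps the bone-length perturbation small.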