Heuristic Black-Box Adversarial Attacks on Video Recognition Models
Authors: Zhipeng Wei, Jingjing Chen, Xingxing Wei, Linxi Jiang, Tat-Seng Chua, Fengfeng Zhou, Yu-Gang Jiang
AAAI 2020, pp. 12338-12345
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results of attacking two mainstream video recognition methods on the UCF-101 dataset and the HMDB-51 dataset demonstrate that the proposed heuristic black-box adversarial attack method can significantly reduce the computation cost and lead to more than 28% reduction in query numbers for the untargeted attack on both datasets. |
| Researcher Affiliation | Academia | (1) Jilin University, (2) Fudan University, (3) Beihang University, (4) National University of Singapore, (5) Health Informatics Lab, College of Computer Science and Technology, and Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, China, 130012 |
| Pseudocode | Yes | Algorithm 1: Heuristic temporal selection algorithm for the targeted attack. Algorithm 2: Heuristic-based targeted attack algorithm. (A hypothetical frame-selection sketch follows the table.) |
| Open Source Code | No | The paper does not provide any specific links to a code repository, nor does it state that the source code for the methodology is openly available or included in supplementary materials. |
| Open Datasets | Yes | Datasets. We consider two widely used datasets for video recognition: UCF-101 (Soomro, Zamir, and Shah 2012) and HMDB-51 (Kuehne et al. 2011). |
| Dataset Splits | Yes | Both datasets use 70% of the videos as the training set and the remaining 30% as the test set. Parameter tuning is done on 30 videos randomly sampled from the test set of UCF-101 that are correctly classified by the target models. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments, only mentioning general model names. |
| Software Dependencies | No | The paper mentions software such as OpenCV but does not specify its version, nor does it list versions for the other key software components or frameworks used in the experiments. |
| Experiment Setup | Yes | Following (Cheng et al. 2018), we set β = 0.005 in all experiments. Besides, we sample u from a Gaussian distribution 20 times, compute their estimators, and average them to obtain a more stable ĝ. For ω, we set it as {0, 3, 6, 9, 12, 15, ∞} in the untargeted attack and as {0, 15, 30, 45, ∞} in the targeted attack... To strike a balance between the MAP and temporal sparsity, we set ω = 3 in the untargeted attack for the subsequent experiments, and ω = 30 in the targeted attack. Similarly, we perform a grid search to decide the value of ϕ: for the untargeted attack, we fix ω = 3 and set ϕ as {0.2, 0.4, 0.6, 0.8, 1.0} to evaluate the performance. We set ϕ to 0.6 for the untargeted attack and to 0.8 for the targeted attack. (A hedged sketch of this gradient estimator appears after the table.) |
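The Experiment Setup row describes the zeroth-order gradient estimation used by the attack: finite differences with β = 0.005 and 20 Gaussian-sampled directions averaged into ĝ, following (Cheng et al. 2018). The snippet below is a minimal NumPy sketch of that averaging step only; the single-sided difference, the `loss_fn` interface, and the toy usage are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def estimate_gradient(loss_fn, x, beta=0.005, num_samples=20, rng=None):
    """Averaged zeroth-order gradient estimate of a black-box loss at x.

    Mirrors the quoted setup: finite differences with step beta = 0.005,
    u drawn from a Gaussian 20 times, and the per-sample estimators
    averaged into a more stable g_hat. The single-sided difference and
    the sampling details are assumptions, not confirmed by the table row.
    """
    rng = np.random.default_rng() if rng is None else rng
    base = loss_fn(x)                      # one query at the current point
    g_hat = np.zeros_like(x, dtype=np.float64)
    for _ in range(num_samples):
        u = rng.standard_normal(size=x.shape)
        g_hat += (loss_fn(x + beta * u) - base) / beta * u
    return g_hat / num_samples

# Toy usage: loss(x) = ||x||^2, whose true gradient is 2x; the estimate
# approximates it up to sampling noise and an O(beta) bias.
x0 = np.ones((4, 8))
g = estimate_gradient(lambda v: float((v ** 2).sum()), x0)
```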
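The Pseudocode row names a heuristic temporal selection algorithm (Algorithm 1), but the table does not reproduce its scoring rule. The sketch below only illustrates the general shape of such a selector: frames are ranked by a placeholder `frame_score` and the top ω are kept, treating ω as a frame-count budget. Both the scorer and that reading of ω are assumptions, not the authors' heuristic.

```python
import numpy as np

def select_key_frames(video, frame_score, omega):
    """Hypothetical heuristic temporal selection.

    video:       array of shape (T, H, W, C)
    frame_score: callable returning a scalar importance per frame
                 (placeholder -- not the paper's scoring rule)
    omega:       how many frames to keep; None keeps all of them

    Returns indices of the omega highest-scoring frames, to which a
    black-box attack could then restrict its perturbation.
    """
    scores = np.array([frame_score(f) for f in video])
    if omega is None or omega >= len(scores):
        return np.arange(len(scores))
    return np.sort(np.argsort(scores)[::-1][:omega])

# Toy usage: score each frame by its deviation from the clip mean.
clip = np.random.rand(16, 112, 112, 3)
mean_frame = clip.mean(axis=0)
score = lambda f: float(np.abs(f - mean_frame).mean())
print(select_key_frames(clip, score, omega=3))
```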