Reinforcement Learning Based Sparse Black-box Adversarial Attack on Video Recognition Models

Authors: Zeyuan Wang, Chaofeng Sha, Su Yang

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | A range of empirical results on real datasets demonstrates the effectiveness and efficiency of the proposed method.
Researcher Affiliation | Academia | Zeyuan Wang, Chaofeng Sha and Su Yang, Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University {18210440020, cfsha, suyang}@fudan.edu.cn
Pseudocode | Yes | Algorithm 1: Policy optimization for selecting key frames; Algorithm 2: Reinforcement learning based sparse targeted video attack. (A hedged code sketch of the selection policy follows the table.)
Open Source Code | No | The paper does not provide an explicit statement about the release of source code or a link to a code repository.
Open Datasets | Yes | Similar to [Wei et al., 2020], we use UCF-101 [Soomro et al., 2012] and HMDB-51 [Kuehne et al., 2011] in our experiments.
Dataset Splits | Yes | We use 20 randomly sampled videos from the original training set of UCF-101 as validation set to do the experiment. (A sampling sketch follows the table.)
Hardware Specification | Yes | The computing infrastructure used for running the experiments includes 8 Nvidia GeForce RTX 2080 Ti GPUs, an Intel Xeon E5-2680 v4 CPU, 320 GB of memory and Ubuntu 16.04.1.
Software Dependencies | No | The paper mentions 'Ubuntu 16.04.1' as part of the computing infrastructure but does not list specific versions of programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow) used in the experiments.
Experiment Setup | Yes | In our experiments, we set the hyperparameters ϵ_p = 0.2, γ = 0.99 and λ = 0.95. ... We set n = 100 in all of the following experiments. ... The iteration of attacking steps will terminate either if MAP reaches the bound or the iteration number exceeds the maximum iteration number T_ai = 1000. (A configuration sketch follows the table.)