Attacking Video Recognition Models with Bullet-Screen Comments
Authors: Kai Chen, Zhipeng Wei, Jingjing Chen, Zuxuan Wu, Yu-Gang Jiang
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments to verify the effectiveness of the proposed method. On both UCF-101 and HMDB-51 datasets, our BSC attack method can achieve about 90% fooling rate when attacking three mainstream video recognition models, while only occluding <8% areas in the video. |
| Researcher Affiliation | Academia | Kai Chen, Zhipeng Wei, Jingjing Chen Zuxuan Wu, Yu-Gang Jiang Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University Shanghai Collaborative Innovation Center on Intelligent Visual Computing {kaichen20, chenjingjing, zxwu, ygj}@fudan.edu.cn, zpwei21@m.fudan.edu.cn |
| Pseudocode | Yes | Algorithm 1: Adversarial BSC attack |
| Open Source Code | Yes | Our code is available at https://github.com/kay-ck/BSC-attack. |
| Open Datasets | Yes | We consider two popular benchmark datasets for video recognition: UCF-101 (Su et al. 2009) and HMDB-51 (Kuehne et al. 2011). |
| Dataset Splits | Yes | Both datasets split 70% of the videos as training set and the remaining 30% as test set. |
| Hardware Specification | Yes | Our approach is implemented on a workstation with four GPUs of NVIDIA GeForce RTX 2080 Ti. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as programming languages, libraries, or frameworks (e.g., PyTorch, TensorFlow, Python version). |
| Experiment Setup | Yes | To strike a balance between FR, AOA and AQN, we set m = 4 and h = 9 to conduct subsequent experiments. ... Therefore, we set λ = 1e-3 so that adversarial BSC attack can achieve the highest FR (%) and the least AQN. ... we set T = DejaVu Serif to achieve the best attack performance for the adversarial BSC attack. ... We optimize the parameters via Adam with a learning rate of 0.03. |
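The Experiment Setup row can be collected into a single configuration, paired with the Adam update the paper says it uses for optimization. The sketch below is illustrative only: the configuration keys and the scalar `adam_step` helper are hypothetical names (the paper's released code at the GitHub link above is the authoritative implementation); only the numeric values (m = 4, h = 9, λ = 1e-3, learning rate 0.03, DejaVu Serif font) come from the table.

```python
# Hypothetical hyperparameter bundle assembled from the paper's reported
# experiment setup; key names are illustrative, not from the released code.
BSC_CONFIG = {
    "num_bscs": 4,         # m: number of bullet-screen comments per video
    "font_height": 9,      # h: BSC font height
    "lambda_reg": 1e-3,    # λ: trade-off weight in the attack objective
    "font": "DejaVu Serif",
    "lr": 0.03,            # Adam learning rate
}

def adam_step(theta, grad, m, v, t, lr=0.03, b1=0.9, b2=0.999, eps=1e-8):
    """One textbook Adam update on a scalar parameter.

    m, v are the running first/second moment estimates; t is the
    1-indexed step counter used for bias correction.
    """
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)       # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)       # bias-corrected second moment
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v
```

With bias correction, the very first Adam step moves the parameter by roughly the learning rate regardless of the gradient's magnitude, which is why a comparatively large value like 0.03 is workable for a low-dimensional attack parameterization.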