Revisiting Adversarial Patches for Designing Camera-Agnostic Attacks against Person Detection
Authors: Hui Wei, Zhixiang Wang, Kewei Zhang, Jiaqi Hou, Yuanwei Liu, Hao Tang, Zheng Wang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that our proposed Camera-Agnostic Patch (CAP) attack effectively conceals persons from detectors across various imaging hardware, including two distinct cameras and four smartphones. |
| Researcher Affiliation | Academia | ¹National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University; ²The University of Tokyo; ³School of Computer Science, Peking University |
| Pseudocode | Yes | Algorithm 1: The proposed adversarial optimization (Attacker and Defender); a hedged sketch of this loop appears below the table. |
| Open Source Code | No | The source code will be made available upon acceptance of the paper. |
| Open Datasets | Yes | We use the INRIAPerson dataset [4, 30] to evaluate digital-space attacks. ... YOLOv5 [17] model pre-trained on the COCO dataset [20] |
| Dataset Splits | No | The paper mentions 613 training images and 288 test images for the INRIAPerson dataset but does not explicitly specify a validation set split. |
| Hardware Specification | Yes | Our implementation utilizes PyTorch on a Linux server equipped with dual NVIDIA GeForce RTX 3090 GPUs. |
| Software Dependencies | No | Our implementation utilizes PyTorch. However, no specific version number for PyTorch or any other software dependency is provided. |
| Experiment Setup | Yes | The adversarial patches are configured with dimensions of 300 × 300, and we employ a YOLOv5 [17] model pre-trained on the COCO dataset [20] and subsequently fine-tuned on INRIAPerson [30] as our victim detector. The detector processes input images at a resolution of 640 × 640, and adversarial training proceeds for 100 epochs. (These settings parameterize the sketch below.) |
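
To make the quoted setup concrete, here is a minimal, hypothetical PyTorch sketch of the attacker side of the patch optimization, using only the numbers reported above (a 300 × 300 patch, 640 × 640 inputs, 100 epochs). The detector stub, the `apply_patch` helper, the loss, and the optimizer settings are all illustrative assumptions, not the authors' code, which was not released at review time (see the Open Source Code row).

```python
import torch
import torch.nn.functional as F

class DummyDetector(torch.nn.Module):
    """Stand-in for the YOLOv5 victim: a frozen conv emitting per-cell
    'person' confidences. Purely illustrative."""
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 1, kernel_size=16, stride=16)

    def forward(self, x):
        return torch.sigmoid(self.conv(x)).flatten(1)  # (batch, cells)

def apply_patch(images, patch, size=150):
    """Paste a resized copy of the patch onto the center of each image.
    The paper presumably places patches on persons; centering is a
    simplification for this sketch."""
    p = F.interpolate(patch.unsqueeze(0), size=(size, size),
                      mode='bilinear', align_corners=False)
    out = images.clone()
    _, _, h, w = images.shape
    y, x = (h - size) // 2, (w - size) // 2
    out[:, :, y:y + size, x:x + size] = p
    return out

detector = DummyDetector().eval()
for param in detector.parameters():
    param.requires_grad_(False)  # only the patch is optimized

patch = torch.rand(3, 300, 300, requires_grad=True)  # 300 x 300, per the setup row
optimizer = torch.optim.Adam([patch], lr=0.01)       # optimizer and lr are assumptions

for epoch in range(100):                             # 100 epochs, per the setup row
    images = torch.rand(4, 3, 640, 640)              # stand-in for 640 x 640 INRIAPerson batches
    patched = apply_patch(images, patch.clamp(0, 1))
    loss = detector(patched).max(dim=1).values.mean()  # suppress the strongest detection
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a faithful reproduction, `DummyDetector` would be replaced by the YOLOv5 model fine-tuned on INRIAPerson, the random batches by an INRIAPerson data loader, and the loss by whatever detection-suppression objective the paper specifies.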
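The Algorithm 1 caption names both an Attacker and a Defender, but the quoted material does not spell out the defender's role. The following continuation of the sketch (reusing `patch`, `optimizer`, `detector`, and `apply_patch` from above) models one plausible reading: a small differentiable camera proxy tuned to restore detections, alternating with patch updates. This formulation is an assumption, not the paper's algorithm.

```python
import torch

# Toy differentiable "camera" proxy; the defender tunes its parameters.
# A 1x1 conv stands in for whatever imaging model the paper actually uses.
camera = torch.nn.Conv2d(3, 3, kernel_size=1)
cam_opt = torch.optim.Adam(camera.parameters(), lr=0.01)

for step in range(100):
    images = torch.rand(4, 3, 640, 640)  # stand-in batches, as above

    # Defender step: adjust the camera proxy to *restore* person detections
    # (maximize confidence), with the patch held fixed.
    scores = detector(camera(apply_patch(images, patch.detach().clamp(0, 1))))
    def_loss = -scores.max(dim=1).values.mean()
    cam_opt.zero_grad()
    def_loss.backward()
    cam_opt.step()

    # Attacker step: update the patch against the hardened camera proxy.
    scores = detector(camera(apply_patch(images, patch.clamp(0, 1))))
    atk_loss = scores.max(dim=1).values.mean()
    optimizer.zero_grad()
    atk_loss.backward()
    optimizer.step()
```

Alternating minimization of this kind would plausibly yield a patch that transfers across imaging pipelines, which is the camera-agnostic behavior the abstract claims; the paper's actual defender may differ.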