Embodied Active Defense: Leveraging Recurrent Feedback to Counter Adversarial Patches
Authors: Lingxuan Wu, Xiao Yang, Yinpeng Dong, Liuwei Xie, Hang Su, Jun Zhu
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that EAD substantially enhances robustness against a variety of patches within just a few steps through its action policy in safety-critical tasks (e.g., face recognition and object detection), without compromising standard accuracy. |
| Researcher Affiliation | Collaboration | Lingxuan Wu1, Xiao Yang1, Yinpeng Dong1,2, Liuwei Xie1, Hang Su1, Jun Zhu1,2; 1 Dept. of Comp. Sci. and Tech., Institute for AI, Tsinghua-Bosch Joint ML Center, THBI Lab, BNRist Center, Tsinghua University, Beijing, 100084, China; 2 RealAI |
| Pseudocode | Yes | Algorithm 1 Learning Embodied Active Defense |
| Open Source Code | No | The paper mentions using several open-source implementations for baselines and pre-trained models (e.g., EG3D, LGS, SAC, DOA, ArcFace, YOLOv5) but provides no link or statement indicating an open-source release of the Embodied Active Defense (EAD) method itself. It notes only that a dataset release is 'forthcoming'. |
| Open Datasets | Yes | We conduct our experiments on CelebA-3D, for which we utilize GAN inversion (Zhu et al., 2016) with EG3D (Chan et al., 2022) to reconstruct 2D face images from CelebA into a 3D form. The CelebA-3D dataset inherits annotations from the original CelebA dataset, which is accessible at https://mmlab.ie.cuhk.edu.cn/projects/CelebA.html. The release of this dataset for public access is forthcoming. |
| Dataset Splits | No | The paper mentions using a 'training set' and 'test pairs'/'test scenes' for evaluation, but does not provide explicit train/validation/test splits — no percentages, per-split sample counts, or description of a dedicated validation set — which limits reproducibility. |
| Hardware Specification | Yes | The performance assessment is conducted on an NVIDIA GeForce RTX 3090 Ti and an AMD EPYC 7302 16-Core Processor, using a training batch size of 64. ... The offline training utilized 2 NVIDIA Tesla A100 GPUs for approximately 4 hours (210 minutes). ... the online training phase required 8 NVIDIA Tesla A100 GPUs and extended to about 14 hours (867 minutes). |
| Software Dependencies | Yes | We use the official implementation and pre-trained model checkpoints for both YOLOv5n and YOLOv5x at https://github.com/ultralytics/yolov5 (ultralytics/yolov5: v5.0 - YOLOv5-P6 1280 models, AWS, Supervise.ly and YouTube integrations. Zenodo, 2021). A minimal loading sketch is given after the table. |
| Experiment Setup | Yes | To expedite EAD's learning of efficient policies requiring minimal perceptual steps, we configure the max horizon length τ = 4. ... Table 5: Hyper-parameters of EAD for face recognition ... Table 11: Hyper-parameters of EAD for object detection. A hedged configuration sketch also follows the table. |
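
To make the software dependency concrete, the following is a minimal sketch (not from the paper) of loading the two cited pre-trained detectors through the official torch.hub entry points of https://github.com/ultralytics/yolov5; the image path `example.jpg` is a placeholder.

```python
import torch

# Load the two pre-trained detectors named in the paper from the official
# ultralytics/yolov5 torch.hub entry points (weights download on first use).
yolov5n = torch.hub.load("ultralytics/yolov5", "yolov5n", pretrained=True)
yolov5x = torch.hub.load("ultralytics/yolov5", "yolov5x", pretrained=True)

# Run inference with the small model; "example.jpg" is a placeholder path.
results = yolov5n("example.jpg")
results.print()  # summary of detected classes and confidence scores
```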
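
The experiment-setup row quotes only two concrete values (max horizon τ = 4, plus the batch size of 64 from the hardware row); the sketch below merely collects them for illustration. The dataclass name and structure are hypothetical, not the authors' code, and the full hyper-parameter lists are in Tables 5 and 11 of the paper.

```python
from dataclasses import dataclass

@dataclass
class EADConfig:
    # Quoted settings only: τ, the max perceptual steps per episode, and the
    # training batch size; all other EAD hyper-parameters are in the paper.
    max_horizon: int = 4
    batch_size: int = 64

config = EADConfig()  # hypothetical container for the quoted values
```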