BadPart: Unified Black-box Adversarial Patch Attacks against Pixel-wise Regression Tasks

Authors: Zhiyuan Cheng, Zhaoyi Liu, Tengda Guo, Shiwei Feng, Dongfang Liu, Mingjie Tang, Xiangyu Zhang

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our attack prototype, named BADPART, is evaluated on both MDE and OFE tasks, utilizing a total of 7 models. BADPART surpasses 3 baseline methods in terms of both attack performance and efficiency. We also apply BADPART on the Google online service for portrait depth estimation, causing 43.5% relative distance error with 50K queries." (Sec. 5, Evaluation: "In this section, we evaluate BADPART on 2 kinds of tasks including 7 subject models.")
Researcher Affiliation | Academia | ¹Department of Computer Science, Purdue University, West Lafayette, USA; ²College of Computer Science, Sichuan University, Chengdu, China; ³Department of Computer Engineering, Rochester Institute of Technology, Rochester, USA.
Pseudocode | Yes | "Alg. 1 describes the proposed universal adversarial patch generation framework." (Algorithm 1: Square-based patch generation framework.) "Alg. 2 describes the probabilistic square area sampling algorithm." (Algorithm 2: Probabilistic square area sampling.) "Details of the algorithm can be found in Alg. 3." (Algorithm 3: Score-based square-area gradient estimation.)
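The paper's Alg. 3 (score-based square-area gradient estimation) is not reproduced in this report, but the general idea of estimating a gradient over a sampled square region of the patch using only black-box loss queries can be sketched with a generic two-point (antithetic) estimator. Everything below is an illustrative assumption, not the authors' exact procedure: the `query_loss` callable, the `(y, x, s)` square encoding, and the estimator form are hypothetical stand-ins.

```python
import numpy as np

def estimate_square_gradient(query_loss, patch, square, alpha=0.1, b=20):
    """Two-point random-sign gradient estimate over one square region
    of the patch (illustrative sketch; the paper's Alg. 3 may differ).

    query_loss : callable returning the scalar attack loss for a patch
                 (each call is one black-box query to the victim model).
    square     : (y, x, s) top-left corner and side length of the
                 sampled square area.
    alpha      : noise bound for the probe perturbations.
    b          : number of query pairs (the paper's default is b = 20).
    """
    y, x, s = square
    grad = np.zeros((s, s) + patch.shape[2:])
    for _ in range(b):
        u = np.sign(np.random.randn(*grad.shape))  # random sign probe
        plus, minus = patch.copy(), patch.copy()
        plus[y:y + s, x:x + s] += alpha * u
        minus[y:y + s, x:x + s] -= alpha * u
        # central finite difference from two queries, projected onto u
        grad += (query_loss(plus) - query_loss(minus)) / (2 * alpha) * u
    return grad / b
```

For a loss that is linear in the square's pixels, this estimator converges to the true per-pixel gradient as `b` grows, at a cost of `2b` queries per sampled square.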
Open Source Code | Yes | The source code is available at https://github.com/Bob-cheng/BadPart.
Open Datasets | Yes | "MDE models were trained on the KITTI depth prediction dataset (Uhrig et al., 2017) and OFE models were trained on the KITTI flow 2015 (Menze et al., 2015). We use 100 scenes from KITTI flow dataset as our training set and another 5 scenes as the validation set during patch generation."
Dataset Splits | Yes | "We use 100 scenes from KITTI flow dataset as our training set and another 5 scenes as the validation set during patch generation. (i.e., m equals 100 and n equals 5 in Alg. 1.)"
Hardware Specification | Yes | "Adversarial patches are generated utilizing a single GPU (Nvidia RTX A6000) equipped with a memory capacity of 48G, in conjunction with an Intel Xeon Silver 4214R CPU."
Software Dependencies | No | The paper mentions the use of an Adam optimizer and deep learning models, but it does not specify version numbers for any software components, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | "We use 100 scenes from KITTI flow dataset as our training set and another 5 scenes as the validation set during patch generation. (i.e., m equals 100 and n equals 5 in Alg. 1.) We establish the initial square area as 2.5% of the patch area, and the pre-defined square size schedule (Algorithm 2 line 4) is set at 100, 500, 1500, 3000, 5000, 10000 for a maximum of 10000 iterations. The square area is halved once the iteration index reaches the pre-defined steps. The initial noise bound α (Algorithm 1 line 7) and noise decay factor γ (Algorithm 1 line 23) are set to 0.1 and 0.98 respectively. The initialization period K (Algorithm 2 line 5) is 100 iterations. We adopt an Adam optimizer with the learning rate of 0.1, and set 0.5 for both β1 and β2. Other hyper-parameters are discussed in the ablation studies, in which we use b = 20, T1 = 1 and T2 = 1 as the default settings."
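The reported settings can be restated compactly as follows. Only the numeric values come from the paper; the constant names and the `square_area_fraction` helper are illustrative assumptions, not the authors' code.

```python
# Patch-generation settings as reported in the paper (values verbatim).
MAX_ITERS = 10_000
SQUARE_STEPS = [100, 500, 1500, 3000, 5000, 10_000]  # halving schedule
ALPHA_INIT = 0.1     # initial noise bound alpha (Alg. 1 line 7)
GAMMA = 0.98         # noise decay factor gamma (Alg. 1 line 23)
K_INIT = 100         # initialization period K (Alg. 2 line 5)
B, T1, T2 = 20, 1, 1 # default ablation hyper-parameters
# Optimizer: Adam with learning rate 0.1 and beta1 = beta2 = 0.5.

def square_area_fraction(t: int, init_frac: float = 0.025) -> float:
    """Square area as a fraction of the patch area at iteration t:
    starts at 2.5% and is halved once t reaches each scheduled step."""
    halvings = sum(t >= step for step in SQUARE_STEPS)
    return init_frac / (2 ** halvings)
```

Under this reading of the schedule, the sampled square shrinks from 2.5% of the patch area at iteration 0 to 2.5%/64 after the final step at iteration 10,000.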