Towards Universal Physical Attacks on Single Object Tracking

Authors: Li Ding, Yongwei Wang, Kaiwen Yuan, Minyang Jiang, Ping Wang, Hua Huang, Z. Jane Wang (pp. 1236-1245)

AAAI 2021

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results show the effectiveness of the physically feasible attacks on SiamMask and SiamRPN++ visual trackers both in digital and physical scenes. In this section, we empirically evaluate the effectiveness of the proposed attacks on visual tracking both in digital and physically feasible scenes.
Researcher Affiliation Academia Li Ding1,2*, Yongwei Wang2*, Kaiwen Yuan2, Minyang Jiang2, Ping Wang1, Hua Huang3, Z. Jane Wang2 1School of Information and Communications Engineering, Xi'an Jiaotong University, 2Department of Electrical and Computer Engineering, University of British Columbia, 3School of Artificial Intelligence, Beijing Normal University {dinglijay, yongweiw, kaiwen, minyang, zjanew}@ece.ubc.ca, ping.fu@xjtu.edu.cn, huahuang@bnu.edu.cn
Pseudocode Yes Algorithm 1: The proposed algorithm of universal and physically feasible attacks on visual tracking.
Open Source Code No The paper does not contain an explicit statement about open-sourcing the code or a link to a code repository for the methodology described.
Open Datasets Yes For the physically feasible attacks in digital scenes, we experimented on three object categories: person, car and cup from the Large-scale Single Object Tracking (LaSOT) dataset (Fan et al. 2019).
Dataset Splits No The paper states it randomly selects one video for adversarial patch generation and attacks the remaining 19 videos, implicitly defining a test set, but it does not specify a separate validation split or its percentage/counts.
Hardware Specification Yes The experiments were conducted on one NVIDIA RTX-2080 Ti GPU card using PyTorch (Paszke et al. 2019).
Software Dependencies No The paper mentions 'PyTorch' but does not specify a version number for it or any other software dependency.
Experiment Setup Yes In all experiments, we keep the patch and object size ratio within 20% to be physically feasible. For parameters in the overall loss expression in Eq.(6), we set D = 3, and the loss weights are set respectively as: α = 1000, β = 1, γ = 0.1. In the Shape loss in Eq.(4), we set K = 20. More concretely, for the shrinking attack, we set h = 1, w = 1, mτ = 0.7; and for the dilation attack, we use h = 1, w = 1, mτ = 0.7. We employ the Adam optimizer from the PyTorch platform with hyperparameters: exponential decays β1 = 0.9, β2 = 0.999, learning rate lr = 10 (for intensity between [0,255]), weight decay set as 0, the batch size set as 20, and the maximum training epochs M = 300.
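The reported optimizer settings can be sketched as a single framework-free Adam update step; only the hyperparameters (lr = 10, β1 = 0.9, β2 = 0.999, weight decay 0) come from the paper, while the scalar parameter, gradient values, and function name are illustrative, and the paper's actual loss in Eq.(6) is not reproduced here.

```python
def adam_step(param, grad, m, v, t, lr=10.0, beta1=0.9, beta2=0.999,
              eps=1e-8, weight_decay=0.0):
    """One Adam update with the reported hyperparameters: lr = 10 is
    sized for pixel intensities in [0, 255]; weight decay is disabled."""
    grad = grad + weight_decay * param
    m = beta1 * m + (1 - beta1) * grad       # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# Illustrative first step on a single pixel intensity: after bias
# correction, the update magnitude is close to the learning rate.
p, m, v = adam_step(128.0, 0.5, 0.0, 0.0, t=1)
```

On the first step the bias-corrected moments reduce to the raw gradient and its square, so the pixel moves by roughly lr in the direction opposite the gradient, which is why a learning rate of 10 is reasonable on the [0, 255] intensity scale.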