Context-Aware Transfer Attacks for Object Detection

Authors: Zikui Cai, Xinxin Xie, Shasha Li, Mingjun Yin, Chengyu Song, Srikanth V. Krishnamurthy, Amit K. Roy-Chowdhury, M. Salman Asif

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper is experimental: "We test our approach on a variety of object detectors with images from PASCAL VOC and MS COCO datasets and demonstrate up to 20 percentage points improvement in performance compared to the other state-of-the-art methods." and "We perform comprehensive experiments on two large-scale object detection datasets to evaluate the proposed context-aware sequential attack strategy."
Researcher Affiliation | Academia | Zikui Cai¹, Xinxin Xie², Shasha Li², Mingjun Yin², Chengyu Song², Srikanth V. Krishnamurthy², Amit K. Roy-Chowdhury¹, M. Salman Asif¹ (¹Electrical and Computer Engineering, University of California Riverside; ²Computer Science and Engineering, University of California Riverside)
Pseudocode | No | The paper describes the attack generation process with equations and a framework overview figure, but it does not include a formal pseudocode or algorithm block.
Open Source Code | No | The paper states, "We use MMDetection (Chen et al. 2019) code repository for the aforementioned models." This refers to a third-party toolbox the authors used, not a release of their own implementation; there is no statement or link indicating that the authors' code is publicly available. (A sketch of loading a detector through MMDetection appears after this table.)
Open Datasets | Yes | "We use images from both PASCAL VOC (Everingham et al. 2010) and MS COCO (Lin et al. 2014) datasets in our experiments."
Dataset Splits | No | The paper mentions using voc2007test and coco2017val to select 500 evaluation images, and voc2007trainval and coco2017train for context graph construction. Although it names standard dataset splits, it never specifies how the 500 images were chosen (no percentages, ID lists, or custom split files), so the exact partition cannot be reproduced. (A hypothetical selection sketch follows the table.)
Hardware Specification | No | The paper does not specify the hardware used for the experiments, such as GPU or CPU models, memory, or cloud computing instances.
Software Dependencies | No | The paper mentions the "MMDetection (Chen et al. 2019) code repository" but gives no version number for MMDetection or for any other software dependency crucial to reproducibility.
Experiment Setup | Yes | "We use I-FGSM-based method to generate a perturbation on the whole image (as discussed in Eqn. (1)), and we limit the maximum perturbation level to be L∞ ∈ {10, 20, 30}." The number of helper objects is empirically chosen to be 5; ϵ is the step size at each iteration, and the weighting factor α is chosen so that the individual loss terms are balanced. (A generic attack sketch follows the table.)
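
For context on the detection stack, loading one of the victim models through MMDetection's high-level Python API looks roughly like the sketch below. The config and checkpoint paths are illustrative placeholders, not files named in the paper, and the calls shown are from the MMDetection 2.x API.

```python
# Minimal sketch of loading a detector with MMDetection's high-level API
# (mmdet 2.x). The config and checkpoint paths are hypothetical examples,
# not artifacts released by the authors.
from mmdet.apis import init_detector, inference_detector

config = 'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'  # example config
checkpoint = 'checkpoints/faster_rcnn_r50_fpn_1x_coco.pth'     # example weights

model = init_detector(config, checkpoint, device='cuda:0')
result = inference_detector(model, 'demo/demo.jpg')  # per-class boxes and scores
```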
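Because the paper gives no selection rule for the 500 evaluation images, any re-implementation has to guess. A minimal sketch, assuming a fixed-seed random draw from the standard voc2007test ID list (both the seed and the selection rule are assumptions, not from the paper):

```python
import random

# Hypothetical reconstruction of the evaluation subset: the paper says 500
# images come from voc2007test / coco2017val but specifies no seed or rule.
random.seed(0)  # assumed seed, not given by the authors
with open('VOCdevkit/VOC2007/ImageSets/Main/test.txt') as f:
    image_ids = [line.strip() for line in f if line.strip()]
eval_ids = sorted(random.sample(image_ids, 500))  # 500-image evaluation set
```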
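Finally, the reported setup can be illustrated with a generic I-FGSM loop under an L∞ budget. This is a minimal sketch, not the authors' code: `attack_loss` stands in for the paper's α-weighted combination of loss terms, and the step size and iteration count are assumed values (the paper names a step size ϵ but not its magnitude).

```python
import torch

def ifgsm(model, x, attack_loss, budget=10 / 255, step=2 / 255, iters=10):
    """Generic iterative FGSM sketch (not the authors' released code).

    budget: L_inf cap on the perturbation; 10/255 corresponds to the
        paper's L_inf = 10 on 8-bit pixel values.
    attack_loss: placeholder for the paper's alpha-weighted objective;
        any differentiable loss over the detector output works here.
    step, iters: assumed values, not quoted from the paper.
    """
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = attack_loss(model(x_adv))
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()              # gradient-sign step
            x_adv = x + (x_adv - x).clamp(-budget, budget)  # project to L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                   # keep valid pixel range
        x_adv = x_adv.detach()
    return x_adv
```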